Similar Literature
 19 similar documents found (search time: 125 ms)
1.
This paper studies the measurement of rough functional dependencies in Rough Relational Databases (RRDB) using a bit-pattern approach from granular computing. It first analyzes rough functional dependencies and the rough upper and lower approximations in an RRDB, then uses bit patterns to represent the attribute values of rough relations. On this basis, a granular-computing model that measures rough functional dependencies via bit patterns is given, and the properties satisfied by the measure are studied.
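The entry above encodes attribute values as bit patterns so that granular-computing operations reduce to bitwise arithmetic. The Python sketch below is only a minimal illustration of that general idea, not the paper's RRDB model: each value's granule becomes a bitmask over tuple positions, and a degree for the dependency X → Y is obtained by granule containment. The toy table, function names, and degree formula are assumptions made for illustration.

```python
# Illustrative sketch (not the paper's RRDB model): granules as bitmasks over rows,
# and a containment-based degree for the functional dependency X -> Y.

def granules(column):
    """Map each distinct value to a bitmask of the row positions where it occurs."""
    masks = {}
    for row, value in enumerate(column):
        masks[value] = masks.get(value, 0) | (1 << row)
    return list(masks.values())

def fd_degree(x_column, y_column):
    """Fraction of rows whose X-granule lies inside a single Y-granule.
    A value of 1.0 means the crisp dependency X -> Y holds exactly."""
    y_masks = granules(y_column)
    covered = 0
    for g in granules(x_column):
        if any(g & y == g for y in y_masks):  # containment checked by bitwise AND
            covered += bin(g).count("1")      # number of rows in this granule
    return covered / len(x_column)

# Toy table: X = department, Y = building
X = ["sales", "sales", "hr", "it"]
Y = ["B1",    "B1",    "B2", "B1"]
print(fd_degree(X, Y))  # 1.0, so X determines Y in this toy table
```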

2.
To characterize and process uncertain XML data, granular computing is used to study the judgment of rough XML functional dependencies in rough XML tree information systems. Based on rough sets, the upper and lower approximations of a rough XML tree information system are defined, and rough XML functional dependencies are further defined via a rough similarity relation. The paper then analyzes how to represent the information values of a rough XML tree information system with bit patterns, proposes an algorithm for judging dependency relationships between paths in such a system, and analyzes its time complexity. A worked example shows that when information values are encoded as bit patterns the data format is closer to the machine's internal representation, rough XML functional dependencies can be judged quickly, and the efficiency and speed of the algorithm are improved.

3.
Research on a Discrete Generation Algorithm for Voronoi Diagrams over Rough Domains (Total citations: 3; self-citations: 0; citations by others: 3)
The Voronoi diagram is an important branch of computational geometry; the rough-domain Voronoi diagram extends the Voronoi concept to complex generating surfaces. This paper proposes the concept of the rough-domain Voronoi diagram and generates it discretely by using the A* algorithm to compute the shortest paths between points on the generating surface and each generator. To reduce the complexity of this discrete generation algorithm, the relationship between the weight of the A* evaluation function and the roughness characteristics of the rough domain is explored in depth. Experimental results show that the weight of the A* evaluation function is positively correlated with the roughness of the rough domain; the optimal weight of the A* evaluation function is obtained accordingly, which greatly reduces the complexity of the discrete generation algorithm for rough-domain Voronoi diagrams.
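The entry above builds the rough-domain Voronoi diagram discretely by running A* shortest-path searches from surface points to each generator. The sketch below only illustrates the general notion of a discrete Voronoi partition over a cost grid; it substitutes a single multi-source Dijkstra search for the per-site A* searches, and the grid, cost model, and function names are illustrative assumptions rather than the paper's algorithm.

```python
# Illustrative sketch (multi-source Dijkstra, not the paper's A*-based algorithm):
# assign every grid cell to the generator point it can reach most cheaply.
import heapq

def discrete_voronoi(cost, sites):
    """cost[r][c] is the cost of entering cell (r, c); sites is a list of generator
    points (r, c). Returns a grid of site indices, i.e. the Voronoi label of each cell."""
    rows, cols = len(cost), len(cost[0])
    dist = [[float("inf")] * cols for _ in range(rows)]
    label = [[-1] * cols for _ in range(rows)]
    heap = []
    for i, (r, c) in enumerate(sites):
        dist[r][c], label[r][c] = 0.0, i
        heapq.heappush(heap, (0.0, r, c, i))
    while heap:
        d, r, c, i = heapq.heappop(heap)
        if d > dist[r][c]:
            continue  # stale queue entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and d + cost[nr][nc] < dist[nr][nc]:
                dist[nr][nc] = d + cost[nr][nc]
                label[nr][nc] = i
                heapq.heappush(heap, (dist[nr][nc], nr, nc, i))
    return label

cost = [[1, 1, 1, 1],
        [1, 5, 5, 1],
        [1, 1, 1, 1]]
for row in discrete_voronoi(cost, [(0, 0), (2, 3)]):
    print(row)  # each cell's label: 0 for the first generator, 1 for the second
```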

4.
The core problems of the A* algorithm are the construction of its evaluation function and the determination of its weight. These are usually handled by experiment and rules of thumb, which yields a coarse evaluation function, high algorithmic complexity, and poor real-time performance. To address this, the paper analyzes the probability distribution of the roughness attribute of the rough domain and shows that the optimal weight of the A* evaluation function over a rough domain is correlated with the standard deviation of that distribution; the result is verified experimentally. The experiments show that choosing the optimal weight significantly reduces the complexity of the A* algorithm and meets the needs of real-time applications.

5.
The core problems of the A* algorithm are the construction of its evaluation function and the determination of its weight. These are usually handled by experiment and rules of thumb, which yields a coarse evaluation function, high algorithmic complexity, and poor real-time performance. To address this, the paper analyzes the probability distribution of the roughness attribute of the rough domain and shows that the optimal weight of the A* evaluation function over a rough domain is correlated with the standard deviation of that distribution; the result is verified experimentally. The experiments show that choosing the optimal weight significantly reduces the complexity of the A* algorithm and meets the needs of real-time applications.
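Entries 4 and 5 concern the A* evaluation function f(n) = g(n) + w·h(n) and how its weight w should be chosen on a rough domain. The sketch below is a generic weighted A* on a 2-D cost grid, intended only to make the role of w concrete; the grid, the Manhattan heuristic, and the fixed value of w are assumptions, and the roughness-based rule for picking the optimal weight described in the abstracts is not implemented.

```python
# Illustrative sketch of weighted A*, f(n) = g(n) + w * h(n), on a 2-D cost grid.
# The grid, the Manhattan heuristic, and the fixed weight w are assumptions; the
# papers' rule for deriving an optimal w from roughness statistics is not shown.
import heapq

def weighted_a_star(grid, start, goal, w=1.0):
    """grid[r][c] is the cost of entering cell (r, c); returns the cost of a path
    from start to goal found under the inflated heuristic w * h."""
    rows, cols = len(grid), len(grid[0])
    h = lambda r, c: abs(r - goal[0]) + abs(c - goal[1])  # Manhattan heuristic
    open_set = [(w * h(*start), 0.0, start)]              # entries are (f, g, node)
    best_g = {start: 0.0}
    while open_set:
        f, g, (r, c) = heapq.heappop(open_set)
        if (r, c) == goal:
            return g
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                ng = g + grid[nr][nc]
                if ng < best_g.get((nr, nc), float("inf")):
                    best_g[(nr, nc)] = ng
                    heapq.heappush(open_set, (ng + w * h(nr, nc), ng, (nr, nc)))
    return float("inf")

grid = [[1, 1, 5],
        [1, 9, 1],
        [1, 1, 1]]
print(weighted_a_star(grid, (0, 0), (2, 2), w=1.2))  # larger w makes the search greedier
```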

6.
Granular Computing (GrC) is a new soft-computing method. This paper uses bit representations of information granules to study the relationships among soft rules in information systems and their measures. Specifically, it first uses soft rules to analyze the relationships among association rules, decision rules, and functional dependencies; it then studies the relationships among the measures of association rules, decision rules, and extensional functional dependencies, and builds a unified model of these measures.

7.
杨洁, 王国胤, 张清华. 《智能系统学报》, 2020, 15(1): 166-174
In granular computing theory, different granulation mechanisms generate different granular structures. In rough sets, different orders of adding attributes to the same information table yield different sequential hierarchical structures, i.e., rough granular structures, and different orders of acquiring attributes lead to different degrees of resolution when solving uncertain problems. How to evaluate rough granular structures effectively is therefore a question worth studying, and this paper addresses it from the viewpoint of knowledge distance. First, building on the knowledge-distance framework proposed in earlier work, a distance between rough approximation spaces is proposed to measure their difference, and the structural characteristics of rough granular structures are studied with this knowledge distance. When solving an uncertain problem within a rough granular structure, the goal is to reduce uncertainty as much as possible while using as little knowledge space as possible under the given constraints. Based on this idea and the conclusions above, an evaluation parameter λ is introduced under an attribute-cost constraint, and an evaluation model for rough granular structures is built on it; the model makes it possible to select a rough granular structure under attribute-cost constraints. Finally, examples verify the effectiveness of the proposed model.

8.
Image Texture Recognition and Retrieval Based on a Rough Granule Model (Total citations: 1; self-citations: 0; citations by others: 1)
Most traditional texture-recognition methods study the image spectrum. This paper instead takes granular computing theory as its basis and uses a hierarchical idea to recognize image texture features. First, by introducing the concepts of granule edges and hierarchical entropy, a rough-granule theory is established and a rough granular space model is constructed. Then a similarity computation based on granule edges and hierarchical entropy is built, yielding an image texture recognition method. The method not only improves the practicality of the model for texture recognition, it also simplifies the computation by carrying out recognition and retrieval simultaneously. Finally, simulation experiments show that the model and the related methods are feasible, and that their recognition and retrieval results compare favorably with other methods.

9.
Feature selection aims to choose a feature subset without redundant features while preserving the classification performance of the data. The rough hypercube method can evaluate a feature subset comprehensively from three aspects, feature relevance, dependency, and significance, and has been used successfully for feature selection. Computing feature-subset combinations is an NP-hard problem, and the traditional forward search strategy yields only locally optimal results. This paper therefore designs a new algorithm that combines discrete particle swarm optimization with the rough hypercube method. The algorithm first uses relevance to generate a set of particles, then takes an improved version of the rough hypercube objective as the optimization function, and finally finds the optimal feature subset by particle swarm iteration. Experimental results show that, compared with the traditional rough hypercube method and a rough-set method with particle swarm optimization, the proposed algorithm obtains feature subsets with fewer features and higher classification performance.

10.
Uncertainty measurement of rough sets is an important research topic in rough set theory. This paper improves uncertainty measures for rough sets by combining fuzzy theory and granular computing. Using the relative knowledge granularity of a set and the boundary entropy, roughness and fuzziness measures of a rough set are defined; as the knowledge granules of the approximation space are refined, both the roughness and the fuzziness decrease monotonically. Matrix-based algorithms for the roughness and fuzziness measures, which are easy to implement, are proposed using matrix theory.
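The entry above defines roughness and fuzziness measures from knowledge granularity and boundary entropy. As a point of reference, the sketch below computes the standard lower and upper approximations under an equivalence relation and the classical roughness measure 1 - |lower| / |upper|; it is a simplification for illustration and does not reproduce the paper's matrix-based algorithms or its fuzziness measure.

```python
# Minimal reference sketch (a simplification, not the paper's matrix algorithms):
# lower/upper approximations of a target set under the equivalence classes induced
# by key(x), and the classical roughness measure 1 - |lower| / |upper|.
from collections import defaultdict

def approximations(universe, key, target):
    """key(x) induces the knowledge granules; target is the concept to approximate."""
    blocks = defaultdict(set)
    for x in universe:
        blocks[key(x)].add(x)
    lower, upper = set(), set()
    for block in blocks.values():
        if block <= target:
            lower |= block   # granules fully inside the concept
        if block & target:
            upper |= block   # granules that touch the concept
    return lower, upper

def roughness(lower, upper):
    return 1 - len(lower) / len(upper) if upper else 0.0

U = {1, 2, 3, 4, 5, 6}
key = lambda x: x % 3        # granules: {3, 6}, {1, 4}, {2, 5}
X = {1, 2, 3, 4}             # target concept
low, up = approximations(U, key, X)
print(low, up, roughness(low, up))  # {1, 4} {1, 2, 3, 4, 5, 6} 0.666...
```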

11.
In this paper, we consider functional dependencies among Boolean dependencies (BDs, for short). Armstrong relations are defined for BDs (called BD-Armstrong relations). For BDs, two necessary and sufficient conditions for the existence of BD-Armstrong relations are given. A necessary and sufficient condition for the existence of Armstrong relations for functional dependencies (FDs, for short) is given, which in some sense is more convenient than the condition given in [3]. We give an algorithm that solves the problem of deciding if two BDs imply the same set of functional dependencies. If the BDs are given in perfect disjunctive normal form, then the algorithm requires only polynomial time. Although Mannila and Räihä have shown that for some relations exponential time is needed for computing any cover of the set of FDs defined in such a relation, as a consequence we show that the problem of deciding if two relations satisfy the same set of FDs can be solved in polynomial time. Another consequence is a new correspondence of the families of functional dependencies to the families of Sperner systems. By this correspondence, the estimate of the number of databases given previously in [6] is improved. It is shown that there is a one-to-one correspondence between the closure of the FDs that hold in a BD and its so-called basic cover. As applications of basic covers, we obtain a representation of a key, the family of minimal keys and a representation of canonical covers. This research was supported by the Hungarian Foundation for Scientific Research, Grant Nos. OTKA 2575, 2149.

12.
Inferring dependencies from relations: a conceptual clustering approach (Total citations: 1; self-citations: 0; citations by others: 1)
In this paper we consider two related types of data dependencies that can hold in a relation: conjunctive implication rules between attribute-value pairs, and functional dependencies. We present a conceptual clustering approach that can be used, with some small modifications, for inferring a cover for both types of dependencies. The approach consists of two steps. First, a particular clustered representation of the relation, called concept (or Galois) lattice, is built. Then, a cover is extracted from the lattice built in the earlier step. Our main emphasis is on the second step. We study the computational complexity of the proposed approach and present an experimental comparison with other methods that confirms its validity. The results of the experiments show that our algorithm for extracting implication rules from concept lattices clearly outperforms an earlier algorithm, and suggest that the overall lattice-based approach to inferring functional dependencies from relations can be seen as an alternative to traditional methods.

13.
We present an approach for mining frequent conjunctive queries in arbitrary relational databases. Our pattern class is the simple, but appealing subclass of simple conjunctive queries. Our algorithm, called Conqueror+, is capable of detecting previously unknown functional and inclusion dependencies that hold on the database relations as well as on joins of relations. These newly detected dependencies are then used to prune redundant queries. We propose an efficient database-oriented implementation of our algorithm using SQL and provide several promising experimental results.

14.
Information Systems, 1999, 24(7): 597-612
Query rewriting using views is a technique for determining how a query may be answered using a given set of resources, which may include materialized views, cached results of previous queries, or queries answerable by other databases. The power of query rewriting can be considerably enhanced by taking into account integrity constraints that are known to hold on base relations. This paper describes an extension of query rewriting that utilizes inclusion dependencies to find rewritings of queries that would otherwise be overlooked. We describe a complete strategy for finding rewritings in the presence of inclusion dependencies and present a basic algorithm that implements that strategy. We also describe extensions to this algorithm when both inclusion and functional dependencies are considered.

15.
Functional dependencies in relational databases are investigated. Eight binary relations, viz., (1) dependency relation, (2) equipotence relation, (3) dissidence relation, (4) completion relation, and dual relations of each of them are described. Any one of these eight relations can be used to represent the functional dependencies in a database. Results from linear graph theory are found helpful in obtaining these representations. The dependency relation directly gives the functional dependencies. The equipotence relation specifies the dependencies in terms of attribute sets which functionally determine each other. The dissidence relation specifies the dependencies in terms of saturated sets in a very indirect way. The completion relation represents the functional dependencies as a function, the range of which turns out to be a lattice. The depletion relation, which is the dual of the completion relation, can also represent functional dependencies, and similarly can the duals of dependency, equipotence, and dissidence relations. The class of depleted sets, which is the dual of saturated sets, is defined and used in the study of depletion relations.

16.
In this paper, we propose an efficient rule discovery algorithm, called FD_Mine, for mining functional dependencies from data. By exploiting Armstrong's Axioms for functional dependencies, we identify equivalences among attributes, which can be used to reduce both the size of the dataset and the number of functional dependencies to be checked. We first describe four effective pruning rules that reduce the size of the search space. In particular, the number of functional dependencies to be checked is reduced by skipping the search for FDs that are logically implied by already discovered FDs. Then, we present the FD_Mine algorithm, which incorporates the four pruning rules into the mining process. We prove the correctness of FD_Mine, that is, we show that the pruning does not lead to the loss of useful information. We report the results of a series of experiments. These experiments show that the proposed algorithm is effective on 15 UCI datasets and synthetic data.
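FD_Mine, as summarized above, repeatedly tests candidate dependencies against the data while pruning with Armstrong's Axioms. The sketch below shows only the core membership test such a miner performs, namely whether X → Y holds in a table (every X-value combination maps to a single Y-value); the pruning rules and lattice traversal are omitted, and the toy rows are an assumption for illustration.

```python
# Illustrative sketch of the core test an FD miner repeatedly applies: does X -> Y
# hold in the data, i.e. does every X-value combination map to a single Y-value?
# FD_Mine's pruning rules and lattice traversal are deliberately omitted here.

def fd_holds(rows, x_attrs, y_attr):
    """rows is a list of dicts; returns True iff the dependency X -> Y holds."""
    seen = {}
    for row in rows:
        key = tuple(row[a] for a in x_attrs)
        if key in seen and seen[key] != row[y_attr]:
            return False  # two rows agree on X but differ on Y
        seen[key] = row[y_attr]
    return True

rows = [
    {"emp": "a", "dept": "sales", "mgr": "x"},
    {"emp": "b", "dept": "sales", "mgr": "x"},
    {"emp": "c", "dept": "hr",    "mgr": "y"},
]
print(fd_holds(rows, ["dept"], "mgr"))  # True:  dept -> mgr
print(fd_holds(rows, ["mgr"], "emp"))   # False: mgr "x" maps to both "a" and "b"
```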

17.
The problem of database normalization in a parallel environment is examined. Generating relation schemes in third normal form is straightforward when given a set of functional dependencies that is a reduced cover. It is shown that a reduced cover for a set of functional dependencies can be produced in parallel. The correctness of the algorithm is based on two important theorems. It is demonstrated that the companion third normal form algorithm can be easily translated into a parallel version. The performance of the two algorithms is compared to the performance of their serial counterparts. The standard serial algorithms for computing minimal covers and synthesizing third normal form relations are presented.

18.
In this paper, we present a new method for computing fuzzy functional dependencies between attributes in fuzzy relational database systems. The method is based on the use of fuzzy implications. A literature analysis has shown that there is no algorithm that would enable the identification of attribute relationships in fuzzy relational schemas. This fact motivated the development of a new methodology for analyzing fuzzy functional dependencies over a given set of attributes. Solving this (not so new) problem is not only a research challenge of theoretical importance, it also has practical significance. Possible applications of the proposed methodology include GIS, data mining, information retrieval, reducing data redundancy in fuzzy relations through implementation of the logical database model, estimation of missing values, etc.

19.
The theory of functional dependencies is based on relations, i.e. sets of tuples. Over relations, the class of functional dependencies subsumes the class of keys. Commercial database systems permit the storage of bags of tuples where duplicate tuples can occur. Over bags, keys and functional dependencies interact differently from how they interact over relations. We establish finite ground axiomatizations of keys and functional dependencies over bags, and show a strong correspondence to goal and definite clauses in classical propositional logic. We define a syntactic Boyce-Codd-Heath Normal Form condition, and show that the condition characterizes schemata that will never have any redundant data value occurrences in their instances. The results close the gap between the existing set-based theory of data dependencies and database practice where bags are permitted.
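The entry above contrasts keys and functional dependencies over bags with their behaviour over relations (sets of tuples). The small worked example below, which is an illustration of my own and not drawn from the paper, shows one way the two notions can come apart once duplicate tuples are allowed: an attribute can functionally determine every other attribute and still fail to be a key of the bag.

```python
# Worked example (illustrative, not from the paper): over bags, "X is a key" is
# strictly stronger than "X functionally determines every attribute", because a
# bag may contain duplicate occurrences of the same tuple.
bag = [("1", "a"), ("1", "a")]  # two copies of one tuple over attributes (A, B)

# The FD A -> B holds: every A-value is paired with exactly one B-value.
fd_holds = len({a for a, _ in bag}) == len({(a, b) for a, b in bag})
print(fd_holds)   # True

# But A is not a key of the bag: a key must also rule out duplicate occurrences.
is_key = len({a for a, _ in bag}) == len(bag)
print(is_key)     # False
```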
