Similar Articles
20 similar articles found (search time: 15 ms)
1.
After a statement of the general problem underlying Quine's methods of simplifying logical expressions, a few examples, and a survey of various approaches to the problem, attention is focused on the question of how to branch when the straightforward simplification rules give no further progress. An algorithm is suggested that takes advantage of the freedom of choice at the branching step in order to split the given problem into several smaller problems. As the difficulty of the problem grows exponentially with its size, this results in a great saving of effort. Both the hand and computer versions of the algorithm are described, since they differ appreciably. The pattern recognition example used to illustrate the paper is chosen as typical of the wide variety of practical questions in which the above general problem arises.
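The "straightforward simplification rule" referred to above is the Quine combining step: two implicant patterns that differ in exactly one bit merge into one pattern with a dash in that position. A minimal sketch of that step in Python (not the paper's branching algorithm, which kicks in only when this rule stalls):

```python
def combine(a: str, b: str):
    """Merge two implicant patterns (e.g. '0110' and '0100') that differ
    in exactly one position; the differing bit becomes a dash."""
    diff = [i for i, (x, y) in enumerate(zip(a, b)) if x != y]
    if len(diff) != 1:
        return None  # the simple rule does not apply -> branching is needed
    i = diff[0]
    return a[:i] + '-' + a[i + 1:]

def one_pass(implicants):
    """One round of simplification: combine every pair differing in a
    single bit; anything left uncombined survives as a prime candidate."""
    merged, used = set(), set()
    for i, a in enumerate(implicants):
        for b in implicants[i + 1:]:
            c = combine(a, b)
            if c is not None:
                merged.add(c)
                used.update((a, b))
    return merged | (set(implicants) - used)
```

Repeated passes of `one_pass` yield the prime implicants; the paper's contribution concerns what to do once no pair combines any further.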

2.
We propose an exact algorithm for counting the models of propositional formulas in conjunctive normal form. Our algorithm is based on the detection of strong backdoor sets of bounded size; each instantiation of the variables of a strong backdoor set puts the given formula into a class of formulas for which models can be counted in polynomial time. For the backdoor set detection we utilize an efficient vertex cover algorithm applied to a certain “obstruction graph” that we associate with the given formula. This approach gives rise to a new hardness index for formulas, the clustering-width. Our algorithm runs in uniform polynomial time on formulas with bounded clustering-width. It is known that the number of models of formulas with bounded clique-width, bounded treewidth, or bounded branchwidth can be computed in polynomial time; these graph parameters are applied to formulas via certain (hyper)graphs associated with formulas. We show that clustering-width and the other parameters mentioned are incomparable: there are formulas with bounded clustering-width and arbitrarily large clique-width, treewidth, and branchwidth. Conversely, there are formulas with arbitrarily large clustering-width and bounded clique-width, treewidth, and branchwidth.
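For reference, the problem being solved (#SAT) can be stated as a brute-force baseline that the backdoor-set method improves upon. A minimal sketch, with clauses written as DIMACS-style lists of signed integers (an assumed encoding, not the paper's data structure):

```python
from itertools import product

def count_models(num_vars: int, clauses):
    """Brute-force #SAT: count the assignments satisfying every clause.
    Clauses use DIMACS-style signed integers: 3 means x3, -3 means NOT x3.
    Exponential in num_vars -- the baseline the backdoor method avoids
    except over the (bounded-size) backdoor variables."""
    count = 0
    for bits in product([False, True], repeat=num_vars):
        # literal lit is satisfied iff bits[|lit|-1] matches its sign
        if all(any((lit > 0) == bits[abs(lit) - 1] for lit in clause)
               for clause in clauses):
            count += 1
    return count
```

The backdoor approach enumerates assignments only over the backdoor variables and counts each residual formula in polynomial time, rather than enumerating all `2^num_vars` assignments as above.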

3.
4.
王华, 晏磊, 钱旭, 朱明. 《微机发展》, 2007, 17(9): 25-27
Underwater vehicle navigation relies primarily on inertial navigation; however, because underwater vehicles travel slowly along the seabed, the positioning error of an inertial navigation system grows as time accumulates, so terrain-aided navigation is needed to correct the inertial errors. This paper introduces the concepts of terrain entropy and terrain difference entropy into underwater terrain matching, designs a combined terrain-matching algorithm based on the two, and applies the algorithm in an underwater terrain-aided navigation system. Experiments show that the algorithm corrects inertial positioning effectively and accomplishes the task of underwater terrain-aided navigation.
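One common way to define the terrain entropy of an elevation window is to normalize the (positive) elevation samples into a probability distribution and take its Shannon entropy, so that flat terrain scores near the maximum and distinctive relief scores lower. A minimal sketch under that assumed definition (the paper's exact formulas for terrain entropy and terrain difference entropy may differ):

```python
import math

def terrain_entropy(heights):
    """Entropy of one elevation window: normalize the positive elevation
    samples into probabilities and take Shannon entropy.  Flat terrain
    gives maximal entropy; rugged, distinctive terrain gives lower values."""
    total = sum(heights)
    return -sum((h / total) * math.log(h / total) for h in heights if h > 0)

def best_match(map_windows, measured):
    """Simplified matching criterion: pick the candidate map window whose
    terrain entropy is closest to that of the measured depth profile."""
    target = terrain_entropy(measured)
    return min(map_windows, key=lambda w: abs(terrain_entropy(w) - target))
```

A real terrain-aided navigation system would combine such entropy features with correlation matching before feeding the fix back to the inertial filter.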

5.
A Combined Terrain-Matching Algorithm Based on Terrain Entropy and Terrain Difference Entropy   (cited: 5; self-cited: 0; cited by others: 5)
Underwater vehicle navigation relies primarily on inertial navigation; however, because underwater vehicles travel slowly along the seabed, the positioning error of an inertial navigation system grows as time accumulates, so terrain-aided navigation is needed to correct the inertial errors. This paper introduces the concepts of terrain entropy and terrain difference entropy into underwater terrain matching, designs a combined terrain-matching algorithm based on the two, and applies the algorithm in an underwater terrain-aided navigation system. Experiments show that the algorithm corrects inertial positioning effectively and accomplishes the task of underwater terrain-aided navigation.

6.
Location-dependent data are central to many emerging applications, ranging from traffic information services to sensor networks. The standard pull- and push-based data dissemination models become unworkable when the data volumes and the number of clients are high. We address this problem using locale covers: a subset of the original set of locations of interest, chosen to include at least one location in a suitably defined neighborhood of any client. Since location-dependent values are highly correlated with location, a query can be answered using a location close to the query point; typical closeness measures are Euclidean distance or a k-nearest-neighbor criterion. We show that location-dependent queries may be answered satisfactorily using locale covers. Our approach is independent of the locations and speeds of clients, and is applicable to mobile clients. We also introduce a nested locale cover scheme that ensures fair access latencies and allows clients to refine the accuracy of their information over time. Finally, we prove two results: one regarding the greedy algorithm for sensor covers, and the other pertaining to randomized locale covers for k-nearest-neighbor queries.
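A locale cover under the Euclidean-distance closeness measure can be sketched with the standard greedy covering heuristic (an illustrative construction, not necessarily the paper's algorithm): repeatedly pick the location that covers the most still-uncovered points.

```python
def greedy_locale_cover(locations, radius):
    """Greedy locale cover: repeatedly pick the location that covers the
    most not-yet-covered locations within `radius` (Euclidean closeness),
    until every location of interest has a representative nearby."""
    def close(p, q):
        return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2 <= radius ** 2

    uncovered, cover = set(locations), []
    while uncovered:
        best = max(locations, key=lambda c: sum(close(c, p) for p in uncovered))
        cover.append(best)
        uncovered = {p for p in uncovered if not close(best, p)}
    return cover
```

Broadcasting only the cover instead of every location is what keeps the dissemination cost independent of the number of clients.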

7.
8.
Among the uncertainty measures for rough sets, fuzzy entropy and fuzziness are important ones. Based on fuzzy entropy and a new fuzziness formula in rough-set uncertainty measurement, this paper proposes the concepts of modified conditional information entropy and relative fuzzy entropy in decision information systems, and proves, in two different ways, the monotonicity of these entropies during attribute reduction. A forward attribute-addition algorithm is then used for attribute reduction, and the reducts are tested for recognition rate on the RIDAS (rough-set based intelligent data analysis system) platform. Experiments compare the reduction results of the two new information entropies against conditional information entropy, providing a reference for entropy-based attribute reduction.

9.
Fuzzy Filtering Based on Approximate Entropy   (cited: 4; self-cited: 0; cited by others: 4)
This paper introduces approximate entropy, a statistic proposed in recent years for measuring the complexity of a signal, and applies it to fuzzy filtering of signals. The resulting filter has the advantage of automatically adjusting its structure according to the characteristics of the signal, and performs particularly well when the signal is non-stationary. The method is adaptive to a degree and is well suited to engineering applications.
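Approximate entropy (ApEn) compares how often length-m templates of the series repeat (within a tolerance r) against how often length-(m+1) templates do; regular signals score near zero, complex ones higher. A self-contained sketch of the standard definition, which the filter above uses as its complexity measure:

```python
import math

def approximate_entropy(series, m=2, r=0.2):
    """Approximate entropy (ApEn) of a time series.  m is the template
    length and r the tolerance (Chebyshev distance between templates).
    Low values indicate a regular, predictable signal; higher values
    indicate greater complexity."""
    n = len(series)

    def phi(m):
        templates = [series[i:i + m] for i in range(n - m + 1)]
        total = 0.0
        for t in templates:
            # fraction of templates within tolerance r of template t
            matches = sum(max(abs(a - b) for a, b in zip(t, u)) <= r
                          for u in templates)
            total += math.log(matches / len(templates))
        return total / len(templates)

    return phi(m) - phi(m + 1)
```

In the filtering application, windows with low ApEn can be smoothed aggressively while high-ApEn windows are treated more conservatively, which is what lets the filter structure track the signal.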

10.
A DNA Immune Genetic Algorithm Based on Information Entropy   (cited: 5; self-cited: 0; cited by others: 5)
郑建刚, 王行愚. 《计算机仿真》, 2006, 23(6): 163-165, 208
To address the problems the standard genetic algorithm encounters in optimization, such as poor local search ability, heavy computation, poor adaptability to large search spaces, and premature convergence, this paper incorporates an immune algorithm into the genetic algorithm, using the immune algorithm's immune memory, self-regulation, and diversity-preservation capabilities to compensate for those weaknesses, and proposes a DNA immune genetic algorithm based on information entropy. The algorithm encodes antibodies as DNA strands, uses information entropy to express both the affinity between antibodies and their concentration, and introduces a new evaluation index, aggregate affinity, which effectively realizes self-regulation and diversity preservation in the antibody population. Finally, typical test functions verify the effectiveness of the proposed method.
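An entropy-based diversity measure of the kind described can be sketched as the average, over loci, of the Shannon entropy of the symbol distribution across the antibody population (an illustrative formulation; the paper's affinity and concentration formulas are more specific):

```python
import math
from collections import Counter

def population_diversity(antibodies):
    """Information-entropy diversity of an antibody population encoded as
    equal-length strings over a DNA alphabet.  At each locus, compute the
    Shannon entropy of the symbol distribution across the population,
    then average over loci: 0 means all antibodies are identical, larger
    values mean a more diverse (less converged) population."""
    length = len(antibodies[0])
    n = len(antibodies)
    total = 0.0
    for i in range(length):
        counts = Counter(ab[i] for ab in antibodies)
        total -= sum((c / n) * math.log2(c / n) for c in counts.values())
    return total / length
```

Monitoring such a measure lets the algorithm trigger its diversity-preservation mechanisms before the population collapses prematurely onto one antibody.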

11.
Passwords are the foundation of electronic authentication, and the protection of passwords and their strength (resistance to cracking) are critical to the security of protected information and information systems. There is as yet no method that defines password strength precisely; based on the concept of entropy, guessing entropy and min-entropy can be used to estimate password strength roughly, which offers useful guidance for password selection. This paper describes entropy-based methods for estimating password strength.
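The two measures named above have standard definitions over a password probability distribution: min-entropy is determined by the single most likely password, while guessing entropy is the expected number of attempts for an attacker who tries passwords in order of likelihood. A minimal sketch, assuming the distribution is given as a frequency table:

```python
import math

def min_entropy(freqs):
    """Min-entropy in bits of a password distribution given as a
    {password: count} table -- a worst-case measure, determined solely
    by the single most probable password."""
    total = sum(freqs.values())
    return -math.log2(max(freqs.values()) / total)

def guessing_entropy(freqs):
    """Expected number of guesses for an optimal attacker who tries
    passwords from most to least common."""
    total = sum(freqs.values())
    probs = sorted((f / total for f in freqs.values()), reverse=True)
    return sum(i * p for i, p in enumerate(probs, start=1))
```

For a uniform distribution over N passwords, min-entropy is log2(N) bits and guessing entropy is (N + 1) / 2 guesses; skewed real-world distributions score much worse on both.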

12.
Entropy-Based Moving Object Detection   (cited: 6; self-cited: 10; cited by others: 6)
Moving object detection is currently one of the most active research topics in computer vision. This paper reviews detection methods such as optical flow and inter-frame differencing, and proposes an entropy-based moving object detection method. Experimental results demonstrate the robustness and effectiveness of the method.
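One simple way an entropy criterion can be combined with inter-frame differencing is to compute the Shannon entropy of the difference image's grey-level histogram: a static scene yields a near-degenerate histogram (entropy close to zero), while a moving object spreads it out. A hypothetical sketch along those lines (the threshold value and decision rule are assumptions, not the paper's):

```python
import math
from collections import Counter

def frame_entropy(diff_pixels):
    """Shannon entropy (bits) of the grey-level histogram of a
    frame-difference image, given as a flat list of pixel values."""
    n = len(diff_pixels)
    counts = Counter(diff_pixels)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def motion_detected(prev, curr, threshold=1.0):
    """Hypothetical decision rule: flag motion when the entropy of the
    absolute inter-frame difference exceeds a tuned threshold."""
    diff = [abs(a - b) for a, b in zip(prev, curr)]
    return frame_entropy(diff) > threshold
```

The entropy criterion is what gives the method robustness to uniform illumination changes, which shift the difference histogram without spreading it.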

13.
A Model of Macroscopic Internet Behavior Based on Information Entropy   (cited: 2; self-cited: 0; cited by others: 2)
The management and planning of the Internet, an open, distributed, non-cooperative, heterogeneous complex system with no central control, as well as the design of next-generation network architectures, all depend on a thorough understanding of network behavior. From a systems-science perspective, this paper proposes a three-dimensional model of Internet system structure, and on that basis applies information-entropy principles to build an Internet information-entropy model for analyzing macroscopic Internet behavior. The model can be used to analyze the interactions among the components of the Internet and how they give rise to its macroscopic behavior. Finally, discriminants for quantitatively analyzing the validity and stability of the information-entropy model are given; the stability discriminant can serve as an important basis for overall control of macroscopic Internet behavior.

14.
15.
A Thresholding Segmentation Method Based on Two-Dimensional Arimoto Entropy   (cited: 1; self-cited: 0; cited by others: 1)
A thresholding segmentation method based on two-dimensional Arimoto entropy is proposed. The two-dimensional histogram of the image is first built from each pixel's grey level and the mean grey level of its neighborhood, and the two-dimensional Arimoto entropy is then computed from this histogram; the grey-level pair that maximizes the entropy is taken as the segmentation threshold. A fast algorithm built on a two-dimensional joint power-probability distribution greatly accelerates the computation and eases hardware implementation. Extensive comparative experiments show that the algorithm is stable and that its overall segmentation quality surpasses thresholding based on two-dimensional Rényi entropy and two-dimensional Shannon entropy.

16.
Research on System Brittleness Based on Brittleness Entropy   (cited: 3; self-cited: 0; cited by others: 3)
Setting aside the complex internal structure of a system and starting directly from its inputs and outputs, this paper builds a brittleness-analysis model that encompasses external environmental inputs and system outputs, and proposes characterizing the probabilistic risk of system brittleness jointly by two factors: the uncertainty of the external environment and the harmfulness of its consequences. Expressions for the system brittleness risk function and the brittleness entropy are defined, the main mathematical properties of brittleness entropy are analyzed, and the probabilistic risk of system brittleness is analyzed quantitatively.

17.
This paper studies the discretization of continuous attributes. It first reviews the entropy-based discretization (EBD) algorithm in detail and analyzes its shortcomings. It then defines a measure of interval density and, inspired by the idea of self-adaptation, improves EBD into a variable-threshold entropy-based discretization algorithm: the interval-density measure allows the algorithm to adjust the entropy threshold appropriately as the density of samples over an interval changes. Experimental results show that, compared with EBD, the improved algorithm preserves simplicity, consistency, and accuracy while being easier to operate.
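The core step shared by EBD and its variable-threshold refinement is choosing the cut point that minimizes the size-weighted class entropy of the two resulting intervals. A minimal sketch of that step (the adaptive threshold logic of the improved algorithm is not shown):

```python
import math
from collections import Counter

def class_entropy(labels):
    """Shannon entropy (bits) of a class-label multiset."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def best_cut(values, labels):
    """Try each boundary between distinct sorted values of one continuous
    attribute and return (weighted entropy, cut point) for the cut that
    minimizes the size-weighted class entropy of the two intervals."""
    pairs = sorted(zip(values, labels))
    n, best = len(pairs), (float('inf'), None)
    for i in range(1, n):
        if pairs[i - 1][0] == pairs[i][0]:
            continue  # no boundary between equal attribute values
        left = [l for _, l in pairs[:i]]
        right = [l for _, l in pairs[i:]]
        h = (len(left) * class_entropy(left) +
             len(right) * class_entropy(right)) / n
        cut = (pairs[i - 1][0] + pairs[i][0]) / 2
        best = min(best, (h, cut))
    return best
```

Applied recursively with a stopping threshold on the entropy gain, this yields a full discretization; the improved algorithm's contribution is making that threshold vary with the interval density.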

18.
To meet the practical needs of a flow-based Trojan detection system, this paper proposes an information-entropy algorithm for deciding whether a data flow is encrypted. It introduces the concept of N-truncated entropy for computing confidence intervals and establishes reliable intervals through simulation. By inspecting a single packet of a data flow, the algorithm can decide whether the entire flow is encrypted, making it efficient enough for real-time, online judgment. Experiments show that the algorithm has high accuracy and a very low false-positive rate; it has been deployed in the flow-based Trojan detection system and fully meets the system's requirements.
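The underlying signal is that encrypted or compressed payloads have near-uniform byte distributions, so their per-byte Shannon entropy approaches 8 bits, while plaintext protocols sit well below. A sketch of that idea, with entropy computed over the first N bytes as a stand-in for the paper's N-truncated entropy (the threshold is an assumption, not the paper's calibrated confidence interval):

```python
import math
from collections import Counter

def byte_entropy(payload: bytes, n_truncate: int = 256):
    """Shannon entropy (bits per byte) of the first n_truncate bytes of a
    packet payload.  Encrypted or compressed data approaches 8 bits/byte;
    plaintext protocol traffic is substantially lower."""
    data = payload[:n_truncate]
    n = len(data)
    counts = Counter(data)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def looks_encrypted(payload: bytes, threshold: float = 7.0):
    """Hypothetical decision rule: classify the flow as encrypted when
    the single-packet entropy exceeds a confidence threshold."""
    return byte_entropy(payload) > threshold
```

Because the decision needs only one packet, the check can run inline on live traffic, which is what makes the real-time judgment described above feasible.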

19.
Tight Results on Minimum Entropy Set Cover   (cited: 1; self-cited: 0; cited by others: 1)
In the minimum entropy set cover problem, one is given a collection of k sets which collectively cover an n-element ground set. A feasible solution of the problem is a partition of the ground set into parts such that each part is included in one of the k given sets. Such a partition defines a probability distribution, obtained by dividing each part size by n. The goal is to find a feasible solution minimizing the (binary) entropy of the corresponding distribution. Halperin and Karp have recently proved that the greedy algorithm always returns a solution whose cost is at most the optimum plus a constant. We improve their result by showing that the greedy algorithm approximates the minimum entropy set cover problem within an additive error of 1 nat = log2 e bits ≈ 1.4427 bits. Moreover, inspired by recent work by Feige, Lovász and Tetali on the minimum sum set cover problem, we prove that no polynomial-time algorithm can achieve a better constant, unless P = NP. We also discuss some consequences for the related minimum entropy coloring problem. G. Joret is a Research Fellow of the Fonds National de la Recherche Scientifique (FNRS).
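The greedy algorithm analyzed above can be sketched directly from the problem statement: repeatedly assign all still-uncovered elements to the given set that contains the most of them, then score the resulting partition by its binary entropy.

```python
import math

def greedy_min_entropy_cover(ground, sets):
    """Greedy heuristic for minimum entropy set cover: repeatedly assign
    every still-uncovered element to the input set containing the most
    of them, and return the binary entropy of the resulting partition.
    The result is guaranteed to be within log2(e) ~ 1.4427 bits of the
    optimum (and, per the paper, no polynomial-time algorithm can
    guarantee a smaller additive constant unless P = NP)."""
    uncovered, parts = set(ground), []
    while uncovered:
        best = max(sets, key=lambda s: len(uncovered & set(s)))
        part = uncovered & set(best)
        parts.append(len(part))
        uncovered -= part
    n = len(ground)
    return -sum((p / n) * math.log2(p / n) for p in parts)
```

Minimizing entropy favors partitions dominated by one large part, which is why the greedy rule of always grabbing the largest available set works so well here.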

20.
This paper studies the discretization of continuous attributes. It first reviews the entropy-based discretization (EBD) algorithm in detail and analyzes its shortcomings. It then defines a measure of interval density and, inspired by the idea of self-adaptation, improves EBD into a variable-threshold entropy-based discretization algorithm: the interval-density measure allows the algorithm to adjust the entropy threshold appropriately as the density of samples over an interval changes. Experimental results show that, compared with EBD, the improved algorithm preserves simplicity, consistency, and accuracy while being easier to operate.


Copyright©北京勤云科技发展有限公司  京ICP备09084417号