Similar literature
20 similar documents found
1.
Induction is the process of reasoning in which general rules are formulated from limited observations of recurring patterns. Decision tree learning is one of the most widely used and practical inductive methods, and it represents its results as a tree. Various decision tree algorithms have been proposed, such as CLS, ID3, Assistant, C4.5, REPTree and Random Tree, but these algorithms suffer from some major shortcomings. In this article, after discussing the main limitations of the existing methods, we introduce a new decision tree induction algorithm that overcomes the problems found in its counterparts. The new method uses bit strings and maintains the relevant training information in them; performing the induction through logical operations on these bit strings makes the process fast. The method has several further important features: it deals with inconsistencies in the data, avoids overfitting and handles uncertainty. We also illustrate additional advantages and new features of the proposed method. The experimental results show the effectiveness of the method in comparison with other methods in the literature.
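To make the bit-string idea concrete, here is a minimal Python sketch (not the authors' algorithm, just an illustration of the data structure): each attribute value and each class label is stored as a bitmask over the training samples, so the entropy and information gain needed at a tree node reduce to bitwise AND plus population counts.

```python
# Minimal sketch of the bit-string idea: attribute values and class labels are
# bitmasks over the training samples; node statistics reduce to AND + popcount.
from math import log2

def popcount(mask):
    return bin(mask).count("1")

def entropy(node_mask, class_masks):
    n = popcount(node_mask)
    if n == 0:
        return 0.0
    h = 0.0
    for cmask in class_masks:
        p = popcount(node_mask & cmask) / n
        if p > 0:
            h -= p * log2(p)
    return h

def info_gain(node_mask, value_masks, class_masks):
    """Gain of splitting the samples in node_mask on one attribute,
    where value_masks[v] marks the samples taking value v."""
    n = popcount(node_mask)
    if n == 0:
        return 0.0
    h_before = entropy(node_mask, class_masks)
    h_after = 0.0
    for vmask in value_masks:
        child = node_mask & vmask
        h_after += popcount(child) / n * entropy(child, class_masks)
    return h_before - h_after

# Toy data: 4 samples, one binary attribute, two classes. Bit i = sample i.
attr_values = [0b0011, 0b1100]   # samples 0,1 take value 0; samples 2,3 take value 1
classes     = [0b0001, 0b1110]   # sample 0 is class A, samples 1-3 are class B
print(info_gain(0b1111, attr_values, classes))
```

Since a node's sample set is a single integer, split evaluation needs only AND operations and popcounts, which is the source of the speed the abstract emphasizes.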

2.
Hyperspectral anomaly detection (HAD) is a branch of target detection that tries to locate pixels that are spectrally or spatially different from their background. In this paper, a visual attention approach is developed to improve HAD. Traditional HAD methods often try to locate anomalous pixels based on spectral information alone; however, the spatial features of hyperspectral datasets also provide valuable information. Here, we fuse spatial and spectral anomaly features based on bottom-up (BU) and top-down (TD) visual attention mechanisms. Through the BU attention, spatial features are extracted by mimicking the functionality of primary visual cortex neurons, while spectral information is obtained through a deep neural network that imitates TD visual attention. The results of the BU and TD branches are then integrated to provide both spectral and spatial information. Our results demonstrate that the proposed method outperforms six state-of-the-art anomaly detection methods under different evaluation metrics.
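As a rough illustration of fusing spectral and spatial anomaly cues (the paper's BU and TD branches are a cortex-inspired model and a deep network, which are not reproduced here), the sketch below substitutes the classical Reed-Xiaoli (RX) detector for the spectral branch and a simple local-contrast map for the spatial branch, then combines the min-max-normalized maps.

```python
import numpy as np

def rx_scores(cube):
    """Classical Reed-Xiaoli (RX) spectral anomaly scores for an
    (H, W, B) hyperspectral cube; stands in for the paper's TD branch."""
    h, w, b = cube.shape
    x = cube.reshape(-1, b).astype(float)
    mu = x.mean(axis=0)
    cov = np.cov(x, rowvar=False) + 1e-6 * np.eye(b)
    diff = x - mu
    scores = np.einsum("ij,jk,ik->i", diff, np.linalg.inv(cov), diff)
    return scores.reshape(h, w)

def local_contrast(cube):
    """Crude spatial saliency: deviation of each pixel's mean intensity from a
    locally averaged version of the scene (stand-in for the BU branch)."""
    img = cube.mean(axis=2)
    pad = np.pad(img, 2, mode="edge")
    blurred = np.zeros_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            blurred[i, j] = pad[i:i + 5, j:j + 5].mean()
    return np.abs(img - blurred)

def minmax(a):
    return (a - a.min()) / (a.max() - a.min() + 1e-12)

def fused_anomaly_map(cube, w_spectral=0.5):
    """Weighted fusion of the normalized spectral and spatial anomaly maps."""
    return (w_spectral * minmax(rx_scores(cube)) +
            (1 - w_spectral) * minmax(local_contrast(cube)))
```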

3.
A good, representative dictionary is the most critical part of the BoVW (Bag of Visual Words) scheme used for tasks such as category identification. Learning dictionaries from datasets is by far the most widely used paradigm, and a plethora of methods exist for this purpose. Dictionary learning methods demand abundant data, and when the amount of training data is limited, the quality of the dictionaries and consequently the performance of BoVW methods suffer. A much less explored path for creating visual dictionaries starts from knowledge of primitives in appearance models and creates families of parametric shape models. In this work, we develop shape models starting from a small number of primitives and build a visual dictionary using various nonlinear operations and nonlinear combinations. Compared with existing model-driven schemes, our method is able to represent and characterize images in various image understanding applications with competitive, and often better, performance.

4.
The use of visual search for knowledge gathering in image decision support
This paper presents a new method of knowledge gathering for decision support in image understanding based on information extracted from the dynamics of saccadic eye movements. The framework involves the construction of a generic image feature extraction library, from which the feature extractors most relevant to the visual assessment by domain experts are determined automatically through factor analysis. The dynamics of the visual search are analyzed with a Markov model in order to provide training information to novices on how and where to look for image features. The validity of the framework has been evaluated in a clinical scenario in which the pulmonary vascular distribution on Computed Tomography images was assessed by experienced radiologists as a potential indicator of heart failure. The performance of the system has been demonstrated by training four novices to follow the visual assessment behavior of two experienced observers. In all cases, the accuracy of the novices improved from near-random decision making (33%) to accuracies ranging from 50% to 68%.
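The Markov modelling of the visual search can be sketched as a first-order transition matrix estimated from experts' fixation sequences over labelled image regions; the snippet below is an illustrative reconstruction, not the authors' implementation.

```python
import numpy as np

def transition_matrix(fixation_sequences, n_regions):
    """First-order Markov model of visual search: estimate the probability of
    a saccade from region i to region j from observed fixation sequences.
    Regions are integer labels 0..n_regions-1 (e.g., the image features
    selected by the factor analysis step)."""
    counts = np.zeros((n_regions, n_regions))
    for seq in fixation_sequences:
        for a, b in zip(seq[:-1], seq[1:]):
            counts[a, b] += 1
    row_sums = counts.sum(axis=1, keepdims=True)
    row_sums[row_sums == 0] = 1.0     # avoid division by zero for unvisited regions
    return counts / row_sums

# Example: two recorded expert scan paths over 3 regions of interest
expert_paths = [[0, 1, 1, 2, 0], [0, 2, 2, 1]]
print(transition_matrix(expert_paths, 3))
```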

5.
许芳 《光机电信息》2008,25(7):43-47
This paper introduces a reliability information system, consisting of a reliability data collection system and a data analysis system. Applying advanced data mining techniques to the reliability information system makes it possible to extract useful knowledge that serves as a basis for reliability analysis and decision making. The paper also discusses in detail the basic concepts, workflow, and application process of one specific data mining technique: decision tree induction.

6.
A script virus detection algorithm based on the fusion of fuzzy patterns and decision trees
Building a decision tree for script virus detection makes full use of the information in the training samples, but when the sample features are complex and the sample size is large, the tree contains a large number of nodes, the computational time complexity is high, and pruning degrades classification accuracy. To improve classifier performance by incorporating fuzzy-pattern information, this paper designs a fusion algorithm built on decision tree classification. The algorithm takes three properties related to the closeness degree of fuzzy patterns as attributes in the decision tree's sample information vectors. Using the training set, split attributes are selected according to the split information value and information gain ratio of these attributes at candidate split points, and the decision tree is built step by step. Experimental results confirm the stability and accuracy of the algorithm and show that this fusion approach increases the discriminative power of the attributes and reduces the number of branches in the decision tree.
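A minimal sketch of the fusion idea, with hypothetical data: the closeness degree between a sample's membership vector and each fuzzy prototype pattern is appended as an extra attribute before a decision tree is trained (scikit-learn's entropy criterion stands in for the split-information/gain-ratio selection described above).

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def closeness(sample, prototype):
    """Hamming closeness degree between two membership vectors in [0, 1]:
    1 means identical, 0 means maximally different."""
    return 1.0 - np.mean(np.abs(np.asarray(sample) - np.asarray(prototype)))

def add_closeness_features(X, prototypes):
    """Append one closeness-degree column per fuzzy prototype pattern,
    so the decision tree can split on them like ordinary attributes."""
    extra = np.array([[closeness(x, p) for p in prototypes] for x in X])
    return np.hstack([X, extra])

# Hypothetical setup: X holds normalized script-behavior features, and the
# prototypes are membership vectors of the "virus" / "benign" fuzzy patterns.
X = np.random.rand(100, 8)
y = np.random.randint(0, 2, 100)
prototypes = [np.full(8, 0.9), np.full(8, 0.1)]

clf = DecisionTreeClassifier(criterion="entropy")  # entropy-based splits as a stand-in
clf.fit(add_closeness_features(X, prototypes), y)
```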

7.
We study the following natural question: Which cryptographic primitives (if any) can be realized by functions with constant input locality, namely functions in which every bit of the input influences only a constant number of bits of the output? This continues the study of cryptography in low complexity classes. It was recently shown by Applebaum et al. (FOCS 2004) that, under standard cryptographic assumptions, most cryptographic primitives can be realized by functions with constant output locality, namely ones in which every bit of the output is influenced by a constant number of bits from the input. We (almost) characterize what cryptographic tasks can be performed with constant input locality. On the negative side, we show that primitives which require some form of non-malleability (such as digital signatures, message authentication, or non-malleable encryption) cannot be realized with constant input locality. On the positive side, assuming the intractability of certain problems from the domain of error correcting codes (namely, hardness of decoding a random binary linear code or the security of the McEliece cryptosystem), we obtain new constructions of one-way functions, pseudorandom generators, commitments, and semantically-secure public-key encryption schemes whose input locality is constant. Moreover, these constructions also enjoy constant output locality and thus they give rise to cryptographic hardware that has constant-depth, constant fan-in and constant fan-out. As a byproduct, we obtain a pseudorandom generator whose output and input locality are both optimal (namely, 3).

8.
秦银雪  李海峰  马琳 《信号处理》2013,29(11):1526-1532
Simulating the human mechanism for recognizing patterns, this paper proposes a feature extraction method based on a reading-style cognitive model that extracts pattern features from visual information, together with a general pattern recognition method based on modeling the topological relations between primitives. A sliding window is used to simulate the human pattern-cognition mechanism: as the window slides, local structural features of the pattern are extracted and the spatial topological relations are constructed. For recognition modeling, a hybrid model combining an artificial neural network (ANN) and a hidden Markov model (HMM) is adopted: the ANN's strong computational power is used to model the primitives, and the HMM's strength in handling sequential data is exploited to model the overall topological structure of the pattern. Experimental results verify the effectiveness and generality of the proposed pattern recognition method.

9.
High Efficiency Video Coding (HEVC) surpasses its predecessors in encoding efficiency by introducing new coding tools, at the cost of increased encoding time complexity. The Coding Tree Unit (CTU) is the main building block used in HEVC. In the HEVC standard, frames are divided into CTUs with a predetermined size of up to 64 × 64 pixels; each CTU is then divided recursively into a number of equally sized square areas known as Coding Units (CUs). Although this flexibility in frame partitioning increases encoding efficiency, it also increases time complexity because of the larger number of ways to find the optimal partitioning. To address this complexity, numerous algorithms have been proposed that eliminate unnecessary searches during CTU partitioning by exploiting correlation in the video. In this paper, existing CTU depth decision algorithms for HEVC are surveyed. These algorithms are categorized into two groups, namely statistics approaches and machine learning approaches. Statistics approaches are further subdivided into neighboring and inherent approaches: neighboring approaches exploit the similarity between adjacent CTUs to limit the depth range of the current CTU, while inherent approaches use only the information available within the current CTU. Machine learning approaches try to extract and exploit similarities implicitly: traditional methods such as support vector machines or random forests use manually selected features, while recently proposed deep learning methods extract features during training. Finally, the paper discusses extending these methods to more recent video coding formats such as Versatile Video Coding (VVC) and AOMedia Video 1 (AV1).
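A neighboring approach can be as simple as the toy heuristic below, which limits the depth search range of the current CTU to (roughly) the span of depths used by its already-coded neighbours; the specific neighbour set and slack are illustrative assumptions, not taken from any particular surveyed algorithm.

```python
def ctu_depth_range(left, above, above_left, colocated, max_depth=3):
    """Toy neighboring-approach heuristic: restrict the depth search range of
    the current CTU to the span observed in already-coded neighbours.
    Each argument is the maximum CU depth (0..max_depth) used by that CTU,
    or None if the neighbour is unavailable (frame border, first frame)."""
    depths = [d for d in (left, above, above_left, colocated) if d is not None]
    if not depths:                       # nothing to exploit: search everything
        return 0, max_depth
    lo = max(min(depths) - 1, 0)         # allow one level of slack either way
    hi = min(max(depths) + 1, max_depth)
    return lo, hi

# Neighbours coded with depths 2, 2, 3 and co-located CTU with depth 2:
print(ctu_depth_range(2, 2, 3, 2))       # -> (1, 3): depth 0 is never evaluated
```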

10.
The goal of an air-defense command decision support system is to provide technical support for commanders' decision making, and information technology provides a solid technical foundation for the development of such systems; in particular, applying GIS technology within a decision support system gives commanders a more intuitive, visual interface. Based on the characteristics of air-defense command decision support systems, this paper proposes design principles for the system, describes the application of geographic information technology within it, and focuses on the rendering of electronic maps, the integration of electronic maps, and the creation of dynamic layers.

11.
Typically, k-means clustering or sparse coding is used for codebook generation in the bag-of-visual-words (BoW) model, and local features are then encoded by calculating their similarities with visual words. However, some useful information is lost during this process. To make use of this information, we propose a novel image representation method that goes one step beyond visual word ambiguity and considers the governing regions of visual words. For each visual application, the weights of local features are determined by the corresponding application classifiers. Each weighted local feature is then encoded not only by considering its similarities with visual words, but also by the visual words' governing regions. In addition, a locality constraint is imposed for efficient encoding, and a weighted feature sign search algorithm is proposed to solve the resulting problem. Image classification experiments on several public datasets demonstrate the effectiveness of the proposed method.
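For context, the snippet below shows standard locality-constrained linear coding (LLC), in which each descriptor is reconstructed only from its k nearest visual words; it is a common baseline for this family of locality-constrained encoders, not the paper's weighted feature sign search.

```python
import numpy as np

def llc_encode(x, codebook, k=5, beta=1e-4):
    """Standard locality-constrained linear coding (LLC), k-NN approximation:
    each descriptor is reconstructed from its k nearest visual words.
    x: (D,) descriptor, codebook: (M, D) visual words."""
    dists = np.linalg.norm(codebook - x, axis=1)
    idx = np.argsort(dists)[:k]                      # the k nearest words
    B = codebook[idx]                                # (k, D) local base
    z = B - x                                        # shift to the descriptor
    C = z @ z.T                                      # local covariance
    C += beta * np.trace(C) * np.eye(k)              # regularization
    w = np.linalg.solve(C, np.ones(k))
    w /= w.sum()                                     # codes sum to one
    code = np.zeros(len(codebook))
    code[idx] = w
    return code

codebook = np.random.rand(256, 128)                  # e.g. 256 visual words, SIFT-sized
descriptor = np.random.rand(128)
print(llc_encode(descriptor, codebook).nonzero()[0]) # only 5 words receive weight
```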

12.
One of the most promising applications of data mining is to biomedical data used in patient diagnosis. Any method of data analysis intended to support the clinical decision-making process should meet several criteria: it should capture clinically relevant features, be computationally feasible, and provide easily interpretable results. In an initial study, we examined the feasibility of using Zernike polynomials to represent biomedical instrument data, in conjunction with a decision tree classifier, to distinguish between diseased and non-diseased eyes. Here, we provide a comprehensive follow-up to that work, examining a second representation, pseudo-Zernike polynomials, to determine whether they provide any increase in classification accuracy. We compare the fidelity of both methods using residual root-mean-square (rms) error and evaluate accuracy using several classifiers: neural networks, C4.5 decision trees, Voting Feature Intervals, and Naïve Bayes. We also examine the effect of several meta-learning strategies: boosting, bagging, and Random Forests (RFs). We present results comparing accuracy as it relates to dataset and transformation resolution over a larger, more challenging, multi-class dataset. They show that classification accuracy is similar for both data transformations but differs by classifier. We find that the Zernike polynomials provide better feature representation than the pseudo-Zernike polynomials and that the decision trees yield the best balance of classification accuracy and interpretability.
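The radial part of the Zernike polynomials used for this feature representation has a closed form; a direct implementation is shown below (the pseudo-Zernike radial polynomials have a similar form with different summation limits and factorial terms).

```python
from math import factorial

def zernike_radial(n, m, rho):
    """Radial part R_n^|m|(rho) of the Zernike polynomial, defined for
    n - |m| even and 0 <= rho <= 1; the full polynomial multiplies this
    by cos(m*theta) or sin(m*theta)."""
    m = abs(m)
    if (n - m) % 2 != 0:
        return 0.0
    total = 0.0
    for k in range((n - m) // 2 + 1):
        total += ((-1) ** k * factorial(n - k) /
                  (factorial(k) *
                   factorial((n + m) // 2 - k) *
                   factorial((n - m) // 2 - k))) * rho ** (n - 2 * k)
    return total

# Sanity checks against closed forms: R_2^0 = 2*rho^2 - 1, R_4^2 = 4*rho^4 - 3*rho^2
print(zernike_radial(2, 0, 0.5))   # -0.5
print(zernike_radial(4, 2, 1.0))   # 1.0
```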

13.
To reduce the encoding time of HEVC screen content coding and improve coding efficiency, this paper proposes a decision-tree-based algorithm for fast CU partitioning and simple PU mode selection in HEVC screen-content intra coding. By analyzing the characteristics of the video sequences, effective feature values are extracted to build decision tree models: variance, gradient information entropy, and the number of distinct pixel values are used to build the CU partitioning decision tree, while the mean non-zero gradient, pixel information entropy, and related features are used to build the PU mode classification decision tree. At each depth, the decision tree model quickly decides the partitioning of the current CU and the PU mode type from the feature values computed for the CU at that depth. By reducing the CU depths and PU modes that must be traversed, the algorithm lowers encoding complexity and achieves fast intra coding. Experimental results show that, compared with the HEVC screen-content reference algorithm, the proposed algorithm reduces encoding time by 30.81% on average, with an average PSNR drop of 0.05 dB and an average bit-rate increase of 1.15%.
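The CU-level features named above (variance, gradient information entropy, and the number of distinct pixel values) are straightforward to compute; the sketch below shows one plausible way, with the exact gradient operator and binning being assumptions rather than the paper's definitions.

```python
import numpy as np

def cu_features(block):
    """Three of the features described above for a luma CU block (2-D uint8 array):
    variance, gradient information entropy, and number of distinct pixel values."""
    block = block.astype(float)
    variance = block.var()

    gx = np.abs(np.diff(block, axis=1))           # horizontal gradients
    gy = np.abs(np.diff(block, axis=0))           # vertical gradients
    grads = np.concatenate([gx.ravel(), gy.ravel()]).astype(int)
    hist = np.bincount(grads)
    p = hist[hist > 0] / grads.size
    gradient_entropy = -(p * np.log2(p)).sum()

    n_pixel_values = np.unique(block).size        # "number of pixel kinds"
    return variance, gradient_entropy, n_pixel_values

# Screen content (text/graphics) tends to have few distinct pixel values and a
# spiky gradient histogram; these values would feed the CU-split decision tree.
cu = np.random.randint(0, 4, (32, 32), dtype=np.uint8) * 60
print(cu_features(cu))
```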

14.
This paper applies the combined use of qualitative Markov trees and belief functions (also known as Dempster-Shafer theory of evidence) to pavement management decision-making. The basic concepts of the belief function approach, namely basic probability assignments, belief functions, and plausibility functions, are discussed, as is the construction of the qualitative Markov tree (join tree). The combined use of the two methods provides a framework for dealing with uncertainty, incomplete data, and imprecise information in the presence of multiple pieces of evidence on decision variables. The approach is attractive because it offers an improved methodology and analysis over the traditional probability methods applied in pavement management decision-making: traditional probability theory, as a mathematical framework for conceptualizing uncertainty, incomplete data, and imprecise information, has several shortcomings that have been addressed by alternative theories. An example is presented to illustrate the construction of qualitative Markov trees from the evidential network and the solution algorithm. The purpose of the paper is to demonstrate how the evidential network and the qualitative Markov tree can be constructed, and how the propagation of m-values can be handled in the network.
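The basic probability assignments mentioned above are combined with Dempster's rule; a small self-contained implementation with a hypothetical pavement-condition example follows.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule of combination for two basic probability assignments.
    m1, m2: dicts mapping frozenset focal elements to masses summing to 1."""
    conflict = 0.0
    combined = {}
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb                 # mass assigned to the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict: evidences cannot be combined")
    return {s: v / (1.0 - conflict) for s, v in combined.items()}

# Two hypothetical pieces of evidence about a pavement section's condition
m1 = {frozenset({"good"}): 0.6, frozenset({"good", "fair"}): 0.4}
m2 = {frozenset({"fair"}): 0.3, frozenset({"good", "fair", "poor"}): 0.7}
print(dempster_combine(m1, m2))
```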

15.
杨欧 《电子科技》2016,29(3):83
This paper introduces the basic principles and information flow of multi-sensor track fusion and further analyzes its application in command automation systems. To address the jagged (saw-tooth) shape of tracks after multi-sensor fusion, a filtering-based track-processing method is proposed, which smooths the fused tracks. Multi-sensor track fusion provides rich intelligence information for command decision making in command automation systems and improves decision efficiency.
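The abstract does not name the filter; as one plausible illustration of smoothing a jagged fused track, the sketch below applies a simple alpha-beta filter to a single coordinate.

```python
def alpha_beta_smooth(positions, dt=1.0, alpha=0.5, beta=0.1):
    """Alpha-beta filter, used here only to illustrate smoothing a jagged fused
    track (one coordinate); positions is a list of fused position reports at a
    fixed reporting interval dt."""
    x_est, v_est = positions[0], 0.0
    smoothed = [x_est]
    for z in positions[1:]:
        x_pred = x_est + v_est * dt        # predict
        r = z - x_pred                     # residual (the "jaggedness")
        x_est = x_pred + alpha * r         # correct position estimate
        v_est = v_est + beta * r / dt      # correct velocity estimate
        smoothed.append(x_est)
    return smoothed

jagged_track = [0.0, 1.3, 1.8, 3.4, 3.6, 5.2, 5.9, 7.1]
print(alpha_beta_smooth(jagged_track))
```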

16.
For face recognition with thermal-infrared and visible images, a scale-invariant feature transform (SIFT) method fused through a vocabulary tree is proposed. First, the visible and thermal images are processed separately, and faces are detected from natural images using a Viola-Jones cascade detector. Then, stable features are extracted from scale space using SIFT descriptors. Finally, a vocabulary tree is used for classification, and score-fusion and decision-fusion algorithms are applied to improve the accuracy and security of the system. Experiments on face images of 41 subjects demonstrate the effectiveness of the method, with a recognition rate approaching 100%. Compared with several recent face recognition methods, the proposed method achieves higher recognition accuracy and reduces computation time to some extent.
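Score-level fusion of the two modalities can be sketched as min-max normalization followed by a weighted sum of the per-identity match scores; the weights and scores below are hypothetical.

```python
import numpy as np

def minmax_norm(scores):
    s = np.asarray(scores, dtype=float)
    return (s - s.min()) / (s.max() - s.min() + 1e-12)

def score_fusion(visible_scores, thermal_scores, w_visible=0.5):
    """Weighted-sum score fusion: per-identity match scores from the visible
    and thermal branches are normalized to [0, 1] and combined; the identity
    with the highest fused score is reported."""
    fused = (w_visible * minmax_norm(visible_scores) +
             (1 - w_visible) * minmax_norm(thermal_scores))
    return int(np.argmax(fused)), fused

# Hypothetical vocabulary-tree match scores against 5 enrolled identities
visible = [12.0, 30.5, 8.2, 11.1, 9.9]
thermal = [10.4, 25.0, 30.2, 9.7, 8.8]
print(score_fusion(visible, thermal))
```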

17.
To address the problems that hand-crafted visual features are not optimally compatible with the hash coding process and that existing hashing methods cannot distinguish the semantic information of images, this paper proposes a method for learning binary hash codes with a deep convolutional neural network. The basic idea is to add a hash layer to a deep residual network so that image features and the hash function are learned simultaneously; a more compact hierarchical hash structure is also proposed to extract features that are closer to the image semantics. Experiments on the MNIST, CIFAR-10, and NUS-WIDE datasets show that the method outperforms existing hashing methods. It unifies feature learning and hash coding, and the deep residual network yields features closer to the image semantics, which in turn improves retrieval accuracy.

18.
In this paper, we present a deep neural network model to enhance intrusion detection performance. A deep learning architecture combining a convolutional neural network and long short-term memory learns the spatial-temporal features of network flows automatically. Flow features are extracted from raw network traffic captures, flows are grouped, and each sequence of N consecutive flow records is transformed into a two-dimensional array, like an image. These two-dimensional feature vectors are normalized and fed to the deep learning model; transforming the flow information in this way keeps the deep learning computationally efficient. Overall, the convolutional neural network learns spatial features and the long short-term memory learns temporal features from the sequence of raw network data. To maximize the detection performance of the deep neural network and reach the highest statistical metric values, we apply the tree-structured Parzen estimator to seek the optimal parameters in the hyperparameter space. Furthermore, we investigate the impact of the flow status interval, flow window size, convolution filter size, and number of long short-term memory units on the detection performance. The presented flow-based intrusion detection method outperforms other publicly available methods, detecting abnormal traffic with 99.09% accuracy and a 0.0227 false alarm rate.
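The flow-to-image transformation described above can be sketched as grouping N consecutive flow records into a window and normalizing it into a 2-D array; the window size and normalization below are illustrative assumptions.

```python
import numpy as np

def flows_to_images(flow_features, window=16):
    """Group consecutive flow records into fixed-size windows and turn each
    window into a normalized 2-D array (window x n_features), the image-like
    input described above for the CNN + LSTM model.
    flow_features: (n_flows, n_features) array of numeric flow features."""
    x = np.asarray(flow_features, dtype=float)
    # column-wise min-max normalization to [0, 1]
    x = (x - x.min(axis=0)) / (x.max(axis=0) - x.min(axis=0) + 1e-12)
    n_windows = len(x) // window
    x = x[: n_windows * window]
    return x.reshape(n_windows, window, x.shape[1])

# e.g. 1000 flows with 20 features each -> 62 "images" of shape 16 x 20
flows = np.random.rand(1000, 20)
print(flows_to_images(flows).shape)
```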

19.
In this paper we present a decision tree with a reject option at each node, which we call a ternary decision tree; the principle of its construction is defined, and a new classification rule extending the classical k-nearest-neighbor rule is proposed. The method has been applied to the monitoring of the core of a fast breeder reactor using the neutronic signal.
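A k-nearest-neighbor rule with a reject option, in the spirit of the extension described above, can be sketched as follows: when the k neighbours do not agree strongly enough, the sample is rejected rather than classified (the vote threshold is an illustrative choice).

```python
import numpy as np
from collections import Counter

def knn_with_reject(x, train_X, train_y, k=5, min_votes=4):
    """k-nearest-neighbour rule with a reject option: if fewer than min_votes
    of the k neighbours agree on a class, the sample is rejected (None is
    returned) instead of being forced into a class, mirroring the third
    "reject" branch of a ternary decision-tree node."""
    dists = np.linalg.norm(np.asarray(train_X, dtype=float) - np.asarray(x, dtype=float), axis=1)
    neighbours = np.asarray(train_y)[np.argsort(dists)[:k]]
    label, votes = Counter(neighbours).most_common(1)[0]
    return label if votes >= min_votes else None

train_X = [[0, 0], [0, 1], [1, 0], [5, 5], [5, 6], [6, 5]]
train_y = [0, 0, 0, 1, 1, 1]
print(knn_with_reject([0.5, 0.5], train_X, train_y, k=3, min_votes=3))  # -> 0
print(knn_with_reject([3.0, 3.0], train_X, train_y, k=3, min_votes=3))  # rejected -> None
```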

20.
A key limitation of many existing visual tracking methods is that they are built upon low-level visual features and have limited power to capture the semantics of the data. To effectively fill this semantic gap in visual tracking with little supervision, we propose a tracking method that constructs a robust object appearance model by learning and transferring mid-level image representations using a deep network, namely Network in Network (NIN). First, we design a simple yet effective method to transfer the mid-level features learned by NIN on source tasks with large-scale training data to tracking tasks with limited training data. Then, to address the drifting problem, we simultaneously utilize the samples collected in the initial frame and the most recent frames. Finally, a heuristic scheme is used to decide whether or not to update the object appearance model. Extensive experiments show the robustness of our method.
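The drift-handling step, combining samples from the initial frame with those from the most recent frames, can be sketched with a simple buffer; the buffer size and mixing ratio below are illustrative assumptions, not the paper's settings.

```python
import random

class AppearanceSampleBuffer:
    """Keeps the samples used to update the appearance model: samples from the
    first frame are kept permanently (an anchor against drift), while samples
    from recent frames live in a bounded rolling window."""
    def __init__(self, max_recent=50):
        self.initial = []                 # frozen first-frame samples
        self.recent = []                  # rolling window of recent samples
        self.max_recent = max_recent

    def add_initial(self, samples):
        self.initial.extend(samples)

    def add_recent(self, samples):
        self.recent.extend(samples)
        self.recent = self.recent[-self.max_recent:]

    def training_batch(self, n, initial_ratio=0.3):
        """Mix first-frame and recent samples, as in the drift-handling step."""
        n_init = min(int(n * initial_ratio), len(self.initial))
        batch = random.sample(self.initial, n_init)
        batch += random.sample(self.recent, min(n - n_init, len(self.recent)))
        return batch
```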
