Similar Documents
20 similar documents found (search time: 156 ms)
1.
An adaptive image fusion method based on dynamic weighting of region features   (Cited: 3; self: 1; others: 3)
An adaptive remote-sensing image fusion method based on dynamic weighting of region features is proposed. The method first decomposes the input images into high- and low-frequency components; it then selects a set of region features for adaptive dynamic weighting, yielding the corresponding high- and low-frequency fusion results; finally, a consistency criterion combines the two into the final fused image. Fusion experiments on SAR and TM images show that the method offers clear advantages in information content, edge preservation, and consistency of the fused result.
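The abstract does not say which region features drive the weighting. A minimal sketch, assuming local variance as the region feature and a per-pixel normalized weight (both hypothetical choices, not the paper's exact rule):

```python
import numpy as np

def local_variance(img, k=3):
    """Per-pixel variance over a k x k neighborhood (reflect-padded)."""
    pad = k // 2
    p = np.pad(img.astype(float), pad, mode="reflect")
    # Stack all k*k shifted views and take the variance across them.
    views = [p[i:i + img.shape[0], j:j + img.shape[1]]
             for i in range(k) for j in range(k)]
    return np.var(np.stack(views), axis=0)

def fuse_region_weighted(a, b, k=3, eps=1e-12):
    """Adaptive weighted fusion: each pixel weighted by its local variance."""
    va, vb = local_variance(a, k), local_variance(b, k)
    w = (va + eps) / (va + vb + 2 * eps)  # more "active" source dominates
    return w * a + (1 - w) * b
```

A region with strong local detail in one source thus dominates the fused output at that location, which is the intent of region-based dynamic weighting.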

2.
A region-feature-weighted adaptive image fusion algorithm based on the wavelet transform   (Cited: 3; self: 0; others: 3)
A region-feature-weighted adaptive image fusion algorithm based on the wavelet transform is proposed. The source images are first decomposed by the wavelet transform into low- and high-frequency subimages; the low-frequency subimages are fused by adaptive dynamic weighting over a set of region features, the high-frequency subimages by a region-energy fusion rule, and the fused image is then reconstructed by the inverse wavelet transform. Experimental results show that the method achieves good fusion quality.

3.
刘丛, 唐坚刚, 张丽红. 《计算机应用》 2010, 30(7): 1867-1869
Retrieval with a single feature, or with manually set weights over multi-dimensional features, increasingly fails to meet the precision requirements of content-based image retrieval. A multi-dimensional feature-vector weighting algorithm based on clustering a training sample set is therefore proposed. A training sample set is built by hand and multi-dimensional features such as color, texture, and shape are extracted from each image; a genetic algorithm then searches for the optimal weight sequence for the feature-vector set, and this weight sequence is used to compute feature values on the test set for retrieval. Experiments show that the algorithm improves retrieval accuracy over single-feature retrieval and manually weighted multi-feature retrieval, and that it is highly accurate when retrieving between two clusters of high similarity.

4.
This paper reports a comparative study of two machine learning methods on Arabic text categorization. Using one collection of news articles as a training set and another as a testing set, we evaluated the K-nearest-neighbor (KNN) algorithm and the support vector machine (SVM) algorithm. We used full word features, with tf.idf as the weighting method for feature selection and CHI statistics as a ranking metric. Experiments showed that both methods performed well on the test corpus, while SVM achieved a better micro-averaged F1 and prediction time.

5.
To address the complexity of automatic sign-language gesture recognition for the deaf, the diversity of geometric gesture features and methods for extracting and recognizing them are studied, and a gesture recognition algorithm based on geometric features is proposed. The gesture image is first processed by skin-color segmentation, edge detection, and logical operations; geometric features such as centroid and area are then computed, with the best feature weights determined experimentally; finally, the features are matched against those of sample images, and the best match is taken as the recognition result. Three gesture libraries were built from 30 letter gestures, one used as the sample set and two as test sets. Experimental results show that recognizing Chinese letter gestures with features extracted this way effectively improves the recognition rate, reaching 93.33% on the test sets.

6.
林梦雷, 刘景华, 王晨曦, 林耀进. 《计算机科学》 2017, 44(10): 289-295, 317
In multi-label learning, feature selection is an effective way to cope with the high dimensionality of multi-label data. Each label separates the samples to a different degree, which may carry useful information for multi-label learning. Based on this assumption, a multi-label feature selection algorithm based on label weights is proposed. The algorithm first weights each label by the classification margin of the samples over the whole feature space; it then takes each feature's ability to discriminate samples under the whole label set as the feature weight, measuring that feature's importance to the label set. Finally, the features are sorted in descending order of weight, yielding a new feature ranking. Experiments on six multi-label datasets with four evaluation metrics show that the proposed algorithm outperforms several popular multi-label feature selection algorithms.

7.
We describe a multi-purpose image classifier that can be applied to a wide variety of image classification tasks without modifications or fine-tuning, and yet provide classification accuracy comparable to state-of-the-art task-specific image classifiers. The proposed image classifier first extracts a large set of 1025 image features including polynomial decompositions, high contrast features, pixel statistics, and textures. These features are computed on the raw image, transforms of the image, and transforms of transforms of the image. The feature values are then used to classify test images into a set of pre-defined image classes. This classifier was tested on several different problems including biological image classification and face recognition. Although we cannot make a claim of universality, our experimental results show that this classifier performs as well or better than classifiers developed specifically for these image classification tasks. Our classifier's high performance on a variety of classification problems is attributed to (i) a large set of features extracted from images; and (ii) an effective feature selection and weighting algorithm sensitive to specific image classification problems. The algorithms are available for free download from openmicroscopy.org.

8.
An automatic keyphrase extraction system for scientific documents   (Cited: 1; self: 0; others: 1)
Automatic keyphrase extraction techniques play an important role for many tasks including indexing, categorizing, summarizing, and searching. In this paper, we develop and evaluate an automatic keyphrase extraction system for scientific documents. Compared with previous work, our system concentrates on two important issues: (1) more precise location for potential keyphrases: a new candidate phrase generation method is proposed based on the core word expansion algorithm, which can reduce the size of the candidate set by about 75% without increasing the computational complexity; (2) overlap elimination for the output list: when a phrase and its sub-phrases coexist as candidates, an inverse document frequency feature is introduced for selecting the proper granularity. Additional new features are added for phrase weighting. Experiments based on real-world datasets were carried out to evaluate the proposed system. The results show the efficiency and effectiveness of the refined candidate set and demonstrate that the new features improve the accuracy of the system. The overall performance of our system compares favorably with other state-of-the-art keyphrase extraction systems.

9.
The human perception of rotational hand–arm vibration has been investigated by means of a test rig consisting of a rigid frame, an electrodynamic shaker unit, a rigid steering wheel, a shaft assembly, bearings and an automobile seat. Fifteen subjects were tested while seated in a driving posture. Four equal sensation tests and one annoyance threshold test were performed using sinusoidal excitation at 18 frequencies in the range from 3 to 315 Hz. In order to guarantee the generality of the equal sensation data, the four tests were defined to permit checks of the possible influence of three factors: reference signal amplitude, psychophysical test procedure and temporary threshold shift caused by the test exposure. All equal sensation tests used a reference sinusoid of 63 Hz at either 1.0 or 1.5 m/s² r.m.s. in amplitude. The four equal sensation curves were similar in shape and suggested a decrease in human sensitivity to hand–arm rotational vibration with increasing frequency. The slopes of the equal sensation curves changed at transition points of approximately 6.3 and 63 Hz. A frequency weighting, called Ws, was developed for the purpose of evaluating steering wheel rotational vibration. The proposed Ws has a slope of 0 dB per octave over the frequency range from 3 to 6.3 Hz, a slope of −6 dB per octave from 6.3 to 50 Hz, a slope of 0 dB per octave from 50 to 160 Hz and a slope of −10 dB per octave from 160 to 315 Hz. Ws provides a possible alternative to the existing Wh frequency weighting defined in International Standards Organisation 5349-1 (2001) and British Standards Institution BS 6842 (1987).

Relevance to industry

For the manufacturers of tyres, steering systems and other vehicular components the proposed Ws frequency weighting provides a more accurate representation of human perception of steering wheel rotational vibration than the Wh weighting of ISO 5349-1 and BS 6842.
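The piecewise slopes quoted for Ws translate directly into a gain function. This sketch normalizes the gain to 1.0 at 3 Hz, which is an assumption for illustration; a standardized weighting would fix the reference level differently:

```python
import math

# Piecewise slopes (dB/octave) of the proposed Ws weighting, per the abstract:
# 0 over 3-6.3 Hz, -6 over 6.3-50 Hz, 0 over 50-160 Hz, -10 over 160-315 Hz.
_SEGMENTS = [(3.0, 6.3, 0.0), (6.3, 50.0, -6.0),
             (50.0, 160.0, 0.0), (160.0, 315.0, -10.0)]

def ws_gain(f):
    """Linear gain of Ws at frequency f in Hz, normalized to 1.0 at 3 Hz."""
    if not 3.0 <= f <= 315.0:
        raise ValueError("Ws is defined for 3-315 Hz")
    gain_db = 0.0
    for lo, hi, slope_db_per_octave in _SEGMENTS:
        if f <= lo:
            break
        octaves = math.log2(min(f, hi) / lo)  # octaves traversed in segment
        gain_db += slope_db_per_octave * octaves
    return 10 ** (gain_db / 20)
```

The gain is flat up to 6.3 Hz, halves per octave (−6 dB) up to 50 Hz, stays flat to 160 Hz, and rolls off at −10 dB per octave beyond that, matching the description above.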


10.
To improve the accuracy of target tracking in video sequences, a tracking algorithm combining low-dimensional Haar-like features with online weighted multiple-instance learning (OWMIL) is proposed. Images in the training set are cropped to build positive and negative sample sets, and low-dimensional Haar-like features extracted by sparse coding are used to represent the target. A set of weak classifiers is learned online from the local sparse features of the positive and negative samples, with instance weighting used to accelerate the learning process, finally yielding a strong classifier for tracking the target in test videos. Experimental results show that the algorithm performs well under rotation, illumination changes, and scale changes, and achieves better tracking than several other improved multiple-instance learning algorithms.

11.
Before accessing a primary user's licensed band, a cognitive user must detect whether the band is idle so as not to interfere with the primary user's communication. Exploiting the different spectral-correlation properties of the primary-user signal and noise, a spectrum-sensing method based on the cyclic spectrum is studied. The cyclic-spectrum magnitude of the received signal at the primary user's non-zero cyclic frequencies is taken as the detection statistic, and the decision rule and detection procedure are given. Using the signal's differing degrees of cyclostationarity at different cyclic frequencies, weighted iterative cooperation across multiple cyclic frequencies improves the confidence of the detection result, and the feasibility of the method is verified by Monte Carlo simulation. Simulation results show that weighted iterative cooperation achieves effective spectrum sensing and outperforms equal-weight cooperative detection, and that a proper choice of the number of signal samples, cyclic frequencies, and iterations both improves the detection probability and preserves detection sensitivity.

12.
冯林, 袁彬, 孙焘, 滕弘飞. 《计算机工程》 2006, 32(18): 208-210
To improve image-retrieval efficiency, relevance feedback has in recent years been introduced into content-based image retrieval, where feature weighting for multi-feature fusion retrieval is an important problem. A new relevance-feedback method based on feature weighting is proposed: on the basis of rough set theory, a decision table is built from the feedback images marked by the user, and the multiple features are weighted by the precision of the decision rules, bringing image retrieval closer to human perception. Experiments show that the method is effective and substantially outperforms Rui's relevance-feedback method.

13.
Traditional machine learning faces a difficulty: when the training data and test data no longer follow the same distribution, a classifier learned on the training set cannot classify the test texts accurately. To address this, following the principles of transfer learning, features in the intersection of the source and target domains are weighted by an improved feature-distribution similarity; for features outside the intersection, semantic similarity and a newly proposed inverse text category index (TF-ICF) are introduced to weight features within the source domain. Abundant labeled source-domain data and a small amount of labeled target-domain data are thus fully exploited to obtain the required features and quickly build a classifier. Experiments on the text dataset 20Newsgroups and non-text UCI datasets show that the feature-transfer weighting algorithm based on distribution and the inverse text category index can transfer and weight features quickly while preserving accuracy.

14.
Feature-weight computation underlies the text-classification process. Traditional probability-based feature-weighting algorithms usually count only term frequency, inverse document frequency, and inverse class frequency, ignoring the relationships between classes, even though for multi-class problems these relationships matter. To remedy this shortcoming, a feature-weighting algorithm based on class variance is proposed, which measures the relationship between classes by the variance of the class-wise document frequencies. Classification experiments with five feature-weighting algorithms on the Sogou news dataset show that, compared with the other four algorithms, the proposed algorithm considerably improves both macro-averaged and micro-averaged F1, improving text-classification performance.
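The abstract defines the weight via the variance of class-wise document frequencies but gives no formula; one plausible reading (per-class document-frequency ratios, population variance) can be sketched as:

```python
import statistics

def class_variance_weight(class_doc_freqs):
    """Variance of a term's per-class document-frequency ratios.

    class_doc_freqs: list of (docs_in_class_containing_term, docs_in_class).
    A term concentrated in few classes gets a high weight; a term spread
    evenly across classes gets a weight near zero.
    """
    ratios = [n / total for n, total in class_doc_freqs]
    return statistics.pvariance(ratios)
```

A term that occurs in half the documents of every class is useless for separating classes and scores 0, while a term confined to one class scores high, which matches the stated intent of measuring inter-class relationships.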

15.
An improved text feature weighting algorithm based on information gain   (Cited: 2; self: 0; others: 2)
The idf factor in the traditional tf.idf algorithm evaluates only a feature's overall ability to distinguish documents; it cannot reflect how differences in the feature's distribution across the training documents and across classes affect the computed weight, which reduces the accuracy of the text representation. To address this, an improved feature-weighting method, tf.igt.igC, is proposed. Starting from the feature's distribution, it introduces the information-gain concept from information theory to account jointly for the specific dimensions of that distribution, overcoming the shortcomings of the traditional formula. Experimental results show that, compared with the tf.idf.ig and tf.idf.igc weighting methods, tf.igt.igC computes feature weights more effectively.
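The information-gain component that tf.igt.igC builds on can be sketched for a binary feature over document classes; how the paper combines it with tf is not given in the abstract, so only IG itself is shown:

```python
import math

def entropy(probs):
    """Shannon entropy in bits, skipping zero-probability terms."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def information_gain(n_ct, n_c, n_t, n):
    """IG of a binary feature t: H(C) - H(C | t present/absent).

    n_ct: docs per class containing t; n_c: docs per class;
    n_t: total docs containing t; n: total docs.
    """
    h_c = entropy([c / n for c in n_c])
    p_t = n_t / n
    h_present = entropy([ct / n_t for ct in n_ct]) if n_t else 0.0
    n_not = n - n_t
    h_absent = entropy([(c - ct) / n_not
                        for c, ct in zip(n_c, n_ct)]) if n_not else 0.0
    return h_c - p_t * h_present - (1 - p_t) * h_absent
```

A feature perfectly aligned with one of two balanced classes yields IG = 1 bit; a feature distributed independently of class yields IG = 0, so IG captures exactly the class-wise distribution differences the abstract says idf misses.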

16.
To address the insufficient emphasis on information and the excessively high feature dimensionality of existing WPCA methods, an improved method combining weighted principal component analysis with rough sets is proposed. Using the principle of weighted PCA, feature weighting is combined with principal component analysis: a new bidirectional three-center Gaussian distribution function is constructed as the weighting function to weight each feature dimension of the image, yielding the feature vector; an improved rough-set attribute-reduction algorithm then filters the resulting feature vectors to remove redundant information. Experimental results show that the method is effective.

17.
Automatic text classification is indispensable for managing and analyzing massive data. Feature-weight computation underlies the text-classification process, and a good feature-weighting algorithm can markedly improve classification performance. This paper compares several feature-weighting algorithms and, to address their shortcomings, proposes a feature-weighting algorithm based on document class density (tf-idcd). Besides the traditional term-frequency measure, it introduces a new concept, document class density, measured as the ratio of the number of documents in a class that contain the feature to the total number of documents in that class. Experiments comparing five algorithms on two common Chinese datasets show that the proposed algorithm improves both macro-averaged and micro-averaged F1 over the other feature-weighting algorithms.
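Document class density is defined in the abstract as the ratio of in-class documents containing the feature to all in-class documents. A minimal sketch; the multiplicative combination with raw term frequency shown here is an assumption, not the paper's stated formula:

```python
def class_density(term, class_docs):
    """Fraction of documents in the class that contain the term."""
    containing = sum(1 for doc in class_docs if term in doc)
    return containing / len(class_docs)

def tf_idcd(term, doc, class_docs):
    """Hypothetical tf-idcd weight: raw term frequency times class density."""
    return doc.count(term) * class_density(term, class_docs)
```

A term present in most documents of a class is characteristic of that class and is boosted, in contrast to idf, which would penalize it for being common.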

18.
Because of the naive Bayes feature-independence assumption, and because traditional TFIDF weighting considers only a feature's distribution over the whole training set while ignoring its relationship to classes and documents, the weights that traditional methods assign to features are not representative of their true importance. To address this, a naive Bayes classification algorithm weighted by two-dimensional information gain is proposed, which further accounts for the effect on classification of a feature's two-dimensional information gain, namely its class information gain and its document information gain. Experiments show that, compared with the traditional weighted naive Bayes algorithm, the proposed algorithm improves precision, recall, and F1 by about 6%.

19.
On the estimation of transfer functions   (Cited: 1; self: 0; others: 1)
Lennart Ljung. Automatica 1985, 21(6): 677-696
This paper treats the close conceptual relationships between basic approaches to the estimation of transfer functions of linear systems. The classical methods of frequency and spectral analysis are shown to be related to the well-known time domain methods of prediction error type via a common “empirical transfer function estimate”. Asymptotic properties of the estimates obtained by the respective methods are also described and discussed. An important feature that is displayed by this treatment is a frequency domain weighting function that determines the distribution of bias in case the true system cannot be exactly described within the chosen model set. The choice of this weighting function is made in terms of noise models for time-domain methods. The noise model thus has a dual character from the system approximation point of view.
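The "empirical transfer function estimate" linking the frequency-domain and time-domain methods is simply the ratio of the output and input DFTs. A minimal sketch on a known FIR system (the system coefficients and signal length are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Known FIR system: y[k] = 0.5*u[k] + 0.3*u[k-1], zero initial condition.
u = rng.standard_normal(4096)
y = 0.5 * u + 0.3 * np.concatenate(([0.0], u[:-1]))

# Empirical transfer function estimate: ratio of DFTs, G_hat = Y / U.
U = np.fft.rfft(u)
Y = np.fft.rfft(y)
G_hat = Y / U

# True frequency response of the FIR filter on the same frequency grid.
w = np.linspace(0, np.pi, len(G_hat))
G_true = 0.5 + 0.3 * np.exp(-1j * w)
```

With a white input the ETFE tracks the true response closely at most frequencies; its large variance for general inputs is what motivates the smoothed and parametric estimators the paper compares.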

20.
To address the excessive redundant and irrelevant features, imbalanced multi-grained scanning, and low parallelization efficiency of parallel deep forest algorithms on big-data problems, an improved parallel deep forest algorithm based on information theory, IPDFIT (improved parallel deep forest based on information theory), is proposed for big-data environments. Based on information theory, the algorithm designs a …


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司)  京ICP备09084417号