Similar Documents
10 similar documents were found for this query.
1.
Dimensionality Reduction and Similarity Search for Large-Scale Time Series Databases    Cited by: 4 (self-citations: 0, citations by others: 4)
李爱国, 覃征. 《计算机学报》, 2005, 28(9): 1467-1475
This paper proposes a systematic approach to similarity queries over time series databases based on the piecewise polynomial representation (PPR). PPR is a class of orthogonal transforms built on linear polynomial regression, and indexing time series data with the PPR transform is theoretically guaranteed to produce no false dismissals. The paper analyzes the computational complexity of PPR and a lower bound on the query threshold, and proposes a quantitative measure of the query efficiency of time series similarity search algorithms. Comparative experiments against similarity search algorithms based on the discrete Fourier transform (DFT) and the discrete wavelet transform (DWT) show that the proposed algorithm achieves high query efficiency with a low index dimensionality.
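As a rough illustration of the piecewise-polynomial idea, the sketch below summarizes each segment of a series by the coefficients of a least-squares line fit. The segment count, the plain per-segment fit, and the function names are assumptions of this sketch, not the paper's exact PPR transform or its indexing scheme.

```python
import numpy as np

def piecewise_linear_features(series, n_segments=8):
    """Reduce a 1-D time series to 2*n_segments features:
    the (slope, intercept) of a least-squares line fitted to each segment."""
    x = np.asarray(series, dtype=float)
    feats = []
    for seg in np.array_split(x, n_segments):
        t = np.arange(len(seg))
        slope, intercept = np.polyfit(t, seg, deg=1)  # linear regression per segment
        feats.extend([slope, intercept])
    return np.array(feats)

# Two similar series map to nearby feature vectors in the reduced space.
rng = np.random.default_rng(0)
a = np.sin(np.linspace(0, 4 * np.pi, 256))
b = a + 0.05 * rng.standard_normal(256)
print(np.linalg.norm(piecewise_linear_features(a) - piecewise_linear_features(b)))
```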

2.
刘兆赓, 李占山, 王丽, 王涛, 于海鸿. 《软件学报》, 2020, 31(5): 1511-1524
Feature selection is an important data preprocessing method that not only alleviates the curse of dimensionality but also improves the generalization ability of learning algorithms. A wide variety of methods have been applied to the feature selection problem; among them, feature selection algorithms based on evolutionary computation have attracted growing attention in recent years and achieved some success. Recent results show that the feature selection algorithm based on forest optimization (FSFOA) offers good classification performance and dimensionality reduction capability. However, the randomness of its initialization stage and the manually set parameters of its global seeding stage limit its accuracy and its ability to reduce dimensionality, and the algorithm is inherently weak at handling high-dimensional data. This paper gives an initialization strategy based on the information gain ratio, generates the global-seeding parameters automatically by borrowing the temperature-control function of simulated annealing, defines a fitness function that incorporates the dimension reduction rate, and applies a greedy algorithm to the resulting high-quality forest, yielding the feature selection algorithm EFSFOA (enhanced feature selection using forest optimization algorithm). In addition, to handle high-dimensional data, an ensemble feature selection framework suited to EFSFOA is built so that the algorithm can deal with high-dimensional feature selection effectively. Comparative experiments show that EFSFOA clearly improves on FSFOA in both classification accuracy and dimension reduction rate, and raises the dimensionality it can handle to 100,000 features. Compared with other efficient evolutionary-computation-based feature selection methods proposed in recent years, EFSFOA remains highly competitive.
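The sketch below illustrates only the kind of fitness function described above, trading off classification accuracy against the dimension reduction rate. The weight `alpha`, the k-NN evaluator, the wine dataset, and the random candidate mask are assumptions for illustration, not the EFSFOA components themselves.

```python
import numpy as np
from sklearn.datasets import load_wine
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

def fitness(mask, X, y, alpha=0.9):
    """Score a binary feature mask as a weighted sum of cross-validated
    k-NN accuracy and the fraction of features removed (reduction rate)."""
    if not mask.any():
        return 0.0
    acc = cross_val_score(KNeighborsClassifier(n_neighbors=3),
                          X[:, mask], y, cv=5).mean()
    reduction = 1.0 - mask.sum() / mask.size
    return alpha * acc + (1.0 - alpha) * reduction

X, y = load_wine(return_X_y=True)
rng = np.random.default_rng(0)
mask = rng.random(X.shape[1]) < 0.5          # a random candidate feature subset
print(round(fitness(mask, X, y), 3))
```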

3.
To address the high dimensionality faced in similarity search over time series data, a dimensionality reduction method is proposed that applies the discrete Walsh transform in Walsh (sequency) ordering ((DWHT)w). (DWHT)w is an orthogonal transform with a simple transform matrix, admits a fast algorithm, and extracts the features of time series data well; indexing time series data with it is theoretically guaranteed to produce no false dismissals. Comparative experiments against methods based on the discrete Fourier transform and the discrete Walsh transform show that ...
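A minimal sketch of the reduction idea: sort the rows of a Hadamard matrix by their number of sign changes to obtain Walsh (sequency) ordering, normalize the transform to be orthonormal, and keep only the first k coefficients, which lower-bounds the true Euclidean distance. The series length and k here are illustrative choices, not the paper's construction.

```python
import numpy as np
from scipy.linalg import hadamard

def walsh_matrix(n):
    """Hadamard matrix with rows sorted by sequency (number of sign changes),
    i.e. Walsh ordering; n must be a power of two."""
    H = hadamard(n)
    sequency = (np.diff(H, axis=1) != 0).sum(axis=1)
    return H[np.argsort(sequency)]

def walsh_features(series, k=8):
    """Orthonormal Walsh transform of a length-2^m series, truncated to k coefficients."""
    x = np.asarray(series, dtype=float)
    W = walsh_matrix(len(x)) / np.sqrt(len(x))   # orthonormal rows
    return (W @ x)[:k]

a = np.sin(np.linspace(0, 4 * np.pi, 64))
b = np.cos(np.linspace(0, 4 * np.pi, 64))
# Orthonormal transform + coefficient truncation: the distance in the reduced
# space never exceeds the true distance, hence no false dismissals when filtering.
print(np.linalg.norm(walsh_features(a) - walsh_features(b)) <= np.linalg.norm(a - b))
```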

4.
In this paper we introduce a novel supervised manifold learning technique called Supervised Laplacian Eigenmaps (S-LE), which uses class label information to guide non-linear dimensionality reduction by adopting the large-margin concept. The graph Laplacian is split into two components, a within-class graph and a between-class graph, to better characterize the discriminant property of the data. Our approach has two important characteristics: (i) it adaptively estimates the local neighborhood surrounding each sample based on data density and similarity, and (ii) the objective function simultaneously maximizes the local margin between heterogeneous samples and pushes homogeneous samples closer to each other. The approach has been tested on several challenging face databases and compared with other linear and non-linear techniques, demonstrating its superiority. Although this paper concentrates on face recognition, the proposed approach could also be applied to other categories of objects characterized by large variations in appearance (such as hand or body pose).
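A toy sketch in the spirit of the within-/between-class graph construction: same-class neighbors are attracted and different-class neighbors repelled by solving a generalized eigenproblem on the two graph Laplacians. The fixed k-NN rule, binary weights, and regularization constant are simplifications; S-LE's adaptive neighborhoods and margin objective are not reproduced here.

```python
import numpy as np
from scipy.linalg import eigh
from sklearn.datasets import load_iris
from sklearn.metrics import pairwise_distances

def supervised_laplacian_embedding(X, y, n_components=2, n_neighbors=5):
    """Toy supervised spectral embedding: attract same-class neighbors,
    repel different-class neighbors, via a generalized eigenproblem."""
    n = len(X)
    D = pairwise_distances(X)
    W_within = np.zeros((n, n))
    W_between = np.zeros((n, n))
    for i in range(n):
        for j in np.argsort(D[i])[1:n_neighbors + 1]:     # k nearest neighbors of i
            if y[i] == y[j]:
                W_within[i, j] = W_within[j, i] = 1.0
            else:
                W_between[i, j] = W_between[j, i] = 1.0
    L_w = np.diag(W_within.sum(1)) - W_within             # within-class graph Laplacian
    L_b = np.diag(W_between.sum(1)) - W_between           # between-class graph Laplacian
    # Maximize between-class spread relative to within-class compactness.
    vals, vecs = eigh(L_b, L_w + 1e-3 * np.eye(n))
    return vecs[:, -n_components:]                        # one coordinate row per sample

X, y = load_iris(return_X_y=True)
print(supervised_laplacian_embedding(X, y).shape)         # (150, 2)
```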

5.
Content-based retrieval has been one of the research hotspots of recent years, and many image retrieval techniques already operate in the pixel domain. Since data compression has become the standard mode for multimedia applications and still images are compressed mainly with JPEG, studying image retrieval methods based on conventional JPEG and on JPEG2000 is a natural next step. This paper surveys recent image retrieval techniques based on the discrete cosine transform, the core algorithm of JPEG, and on the discrete wavelet transform, the core algorithm of JPEG2000.
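A minimal sketch of compressed-domain-style features of the kind such methods build on: the 2-D DCT coefficients of 8x8 blocks (the JPEG block size), keeping a few low-frequency coefficients per block as a descriptor. The block size, the number of retained coefficients, and the function name are illustrative assumptions, not a method from the surveyed papers.

```python
import numpy as np
from scipy.fft import dctn

def block_dct_features(image, block=8, keep=4):
    """Describe a grayscale image by the top-left `keep` x `keep` DCT
    coefficients of each non-overlapping `block` x `block` tile."""
    h, w = image.shape
    feats = []
    for r in range(0, h - h % block, block):
        for c in range(0, w - w % block, block):
            tile = image[r:r + block, c:c + block].astype(float)
            coeffs = dctn(tile, norm='ortho')            # 2-D DCT, as used in JPEG
            feats.append(coeffs[:keep, :keep].ravel())   # low-frequency corner
    return np.concatenate(feats)

img = np.random.default_rng(0).integers(0, 256, (64, 64))
print(block_dct_features(img).shape)   # 64 blocks x 16 coefficients -> (1024,)
```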

6.
Dimensionality reduction with orthogonal projections better preserves information related to the metric structure and improves face recognition performance. Building on spectral regression discriminant analysis (SRDA) and spectral regression kernel discriminant analysis (SRKDA), this paper proposes the orthogonal dimensionality reduction algorithms OSRDA and OSRKDA. First, a method for computing an orthogonal set of discriminant vectors based on Cholesky decomposition is given; this method is then used to orthogonalize the projection vectors of SRDA and SRKDA. It is simple, easy to implement, and avoids the drawback that iteratively computing orthogonal discriminant vectors is ill-suited to spectral regression (SR) dimensionality reduction. Experimental results on the ORL, Yale and PIE databases demonstrate the effectiveness and efficiency of the algorithms: they reduce the dimensionality effectively while further improving discriminative power.
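The Cholesky-based orthogonalization step itself can be sketched generically: the Cholesky factor R of the Gram matrix A^T A turns a set of linearly independent projection vectors A into an orthonormal basis Q = A R^{-1} of the same subspace (the Cholesky-QR trick). This is only the orthogonalization building block under that assumption, not the full OSRDA/OSRKDA pipeline.

```python
import numpy as np
from scipy.linalg import cholesky, solve_triangular

def orthogonalize_cholesky(A):
    """Orthonormalize the columns of A (full column rank) via Cholesky-QR:
    A^T A = R^T R  =>  Q = A R^{-1} has orthonormal columns and the same span."""
    R = cholesky(A.T @ A, lower=False)                    # upper-triangular factor
    return solve_triangular(R, A.T, trans='T', lower=False).T

rng = np.random.default_rng(0)
A = rng.standard_normal((100, 5))        # e.g. five non-orthogonal projection vectors in R^100
Q = orthogonalize_cholesky(A)
print(np.allclose(Q.T @ Q, np.eye(5)))   # True: columns are orthonormal
```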

7.
We introduce a new representation for time series, the Multiresolution Vector Quantized (MVQ) approximation, along with a distance function. Like the Discrete Wavelet Transform, MVQ keeps both local and global information about the data. However, instead of keeping low-level time series values, it maintains high-level feature information (key subsequences), facilitating the introduction of more meaningful similarity measures. The method is fast and scales linearly with the database size and dimensionality. Unlike previous methods, the vast majority of which use the Euclidean distance, MVQ uses a multiresolution, hierarchical distance function. In our experiments, the proposed technique consistently outperforms the other major methods.
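A rough single-resolution sketch of the vector-quantization idea: cluster sliding-window subsequences into a small codebook (k-means here) and describe each series by its histogram of codewords. MVQ additionally works at multiple resolutions and uses a hierarchical distance, which this toy version omits; window size, step, and codebook size are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def subsequences(series, window=16, step=8):
    x = np.asarray(series, dtype=float)
    return np.array([x[i:i + window] for i in range(0, len(x) - window + 1, step)])

def codeword_histogram(series, codebook, window=16, step=8):
    """Represent a series by how often each key subsequence (codeword) occurs."""
    labels = codebook.predict(subsequences(series, window, step))
    return np.bincount(labels, minlength=codebook.n_clusters) / len(labels)

rng = np.random.default_rng(0)
train = [np.sin(np.linspace(0, 8 * np.pi, 256) + rng.uniform(0, np.pi)) for _ in range(20)]
codebook = KMeans(n_clusters=8, n_init=10, random_state=0).fit(
    np.vstack([subsequences(s) for s in train]))
h1 = codeword_histogram(train[0], codebook)
h2 = codeword_histogram(train[1], codebook)
print(np.abs(h1 - h2).sum())   # L1 distance between the two codeword histograms
```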

8.
There is great interest in dimensionality reduction techniques for tackling the problem of high-dimensional pattern classification. This paper addresses supervised learning of a linear dimension-reduction mapping suitable for classification problems. The proposed optimization procedure minimizes an estimate of the nearest neighbor classifier's error probability, and it learns a linear projection together with a small set of prototypes that support the class boundaries. The learned classifier is very computationally efficient, making classification much faster than state-of-the-art classifiers such as SVMs while achieving competitive recognition accuracy. The approach has been assessed through a series of experiments, showing uniformly good behavior and competitive results compared with several recently proposed supervised dimensionality reduction techniques.
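A toy numpy sketch of the general idea: jointly optimize a linear projection and a small set of prototypes with gradient descent on a differentiable classification loss. Here a softmax (soft nearest-prototype) surrogate with one prototype per class is used purely for illustration; the paper itself minimizes an estimate of the 1-NN error probability, which differs from this surrogate, and all names and hyperparameters below are assumptions.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.preprocessing import StandardScaler

def soft_nn_projection(X, y, m=2, lr=0.1, epochs=500, seed=0):
    """Jointly learn a projection B (d x m) and one prototype per class by
    gradient descent on a softmax loss over negative squared distances."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    classes = np.unique(y)
    Y = (y[:, None] == classes[None, :]).astype(float)      # one-hot labels
    B = 0.1 * rng.standard_normal((d, m))
    P = 0.1 * rng.standard_normal((len(classes), m))         # prototypes in projected space
    for _ in range(epochs):
        Z = X @ B                                            # projected samples, n x m
        diff = Z[:, None, :] - P[None, :, :]                 # n x c x m
        S = -(diff ** 2).sum(-1)                             # negative squared distances
        Q = np.exp(S - S.max(1, keepdims=True))
        Q /= Q.sum(1, keepdims=True)                         # softmax "posteriors"
        G = Q - Y                                            # gradient of loss w.r.t. S
        dZ = -2.0 * (G[:, :, None] * diff).sum(1)
        dP = 2.0 * (G[:, :, None] * diff).sum(0)
        B -= lr * (X.T @ dZ) / n
        P -= lr * dP / n
    return B, P

X, y = load_iris(return_X_y=True)
X = StandardScaler().fit_transform(X)
B, P = soft_nn_projection(X, y)
d2 = (((X @ B)[:, None, :] - P[None, :, :]) ** 2).sum(-1)
print("training accuracy of the nearest-prototype rule:", round((d2.argmin(1) == y).mean(), 3))
```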

9.
Efficient and robust feature extraction by maximum margin criterion    Cited by: 15 (self-citations: 0, citations by others: 15)
In pattern recognition, feature extraction techniques are widely employed to reduce the dimensionality of data and to enhance the discriminatory information. Principal component analysis (PCA) and linear discriminant analysis (LDA) are the two most popular linear dimensionality reduction methods. However, PCA is not very effective at extracting the most discriminant features, and LDA is not stable due to the small sample size problem. In this paper, we propose new (linear and nonlinear) feature extractors based on the maximum margin criterion (MMC). Geometrically, feature extractors based on MMC maximize the (average) margin between classes after dimensionality reduction. It is shown that MMC can represent class separability better than PCA. As a connection to LDA, we may also derive LDA from MMC by incorporating certain constraints. By using other constraints, we establish a new linear feature extractor that does not suffer from the small sample size problem, which is known to cause serious stability problems for LDA. The kernelized (nonlinear) counterpart of this linear feature extractor is also established in the paper. Our extensive experiments demonstrate that the new feature extractors are effective, stable, and efficient.
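A compact sketch of a linear MMC-style extractor: compute the between-class scatter S_b and within-class scatter S_w and take the leading eigenvectors of S_b - S_w, which requires no matrix inversion and so sidesteps the small sample size problem. Any class weighting or additional constraints from the paper are omitted; this is an illustration, not the paper's full method.

```python
import numpy as np
from scipy.linalg import eigh
from sklearn.datasets import load_iris

def mmc_projection(X, y, n_components=2):
    """Maximum margin criterion projection: leading eigenvectors of S_b - S_w."""
    mean = X.mean(0)
    d = X.shape[1]
    S_w = np.zeros((d, d))
    S_b = np.zeros((d, d))
    for c in np.unique(y):
        Xc = X[y == c]
        mc = Xc.mean(0)
        S_w += (Xc - mc).T @ (Xc - mc)
        S_b += len(Xc) * np.outer(mc - mean, mc - mean)
    vals, vecs = eigh(S_b - S_w)                     # symmetric eigendecomposition
    return vecs[:, np.argsort(vals)[::-1][:n_components]]

X, y = load_iris(return_X_y=True)
W = mmc_projection(X, y)
print((X @ W).shape)    # (150, 2): reduced features
```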

10.
Stable orthogonal local discriminant embedding (SOLDE) is a recently proposed dimensionality reduction method in which the similarity, diversity and interclass separability of the data samples are exploited to obtain a set of orthogonal projection vectors. By combining multiple features of the data, it outperforms many prevalent dimensionality reduction methods. However, its orthogonal projection vectors are obtained by a step-by-step procedure, which makes it computationally expensive. By generalizing the objective function of SOLDE to a trace ratio problem, we propose stable orthogonal local discriminant embedding using the trace ratio criterion (SOLDE-TR) for dimensionality reduction. An iterative procedure is provided to solve the trace ratio problem, which makes SOLDE-TR consistently faster than SOLDE. The projection vectors of SOLDE-TR converge to a global solution, and its performance is consistently better than that of SOLDE. Experimental results on two public image databases demonstrate the effectiveness and advantages of the proposed method.
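A generic iterative solver for the trace ratio problem the abstract refers to, maximizing tr(W^T A W) / tr(W^T B W) over orthonormal W by alternating a lambda update with an eigendecomposition of A - lambda*B. The matrices A and B here are toy stand-ins for the scatter-like matrices SOLDE-TR would build from data, which is not reproduced.

```python
import numpy as np
from scipy.linalg import eigh

def trace_ratio(A, B, k, iters=30, tol=1e-10):
    """Iteratively maximize tr(W^T A W) / tr(W^T B W) subject to W^T W = I."""
    lam = 0.0
    for _ in range(iters):
        vals, vecs = eigh(A - lam * B)
        W = vecs[:, np.argsort(vals)[::-1][:k]]      # top-k eigenvectors of A - lam*B
        new_lam = np.trace(W.T @ A @ W) / np.trace(W.T @ B @ W)
        if abs(new_lam - lam) < tol:
            break
        lam = new_lam
    return W, lam

# Toy symmetric matrices (B made positive definite so the ratio is well defined).
rng = np.random.default_rng(0)
M = rng.standard_normal((10, 10)); A = M @ M.T
N = rng.standard_normal((10, 10)); B = N @ N.T + 10 * np.eye(10)
W, lam = trace_ratio(A, B, k=3)
print(round(lam, 4), np.allclose(W.T @ W, np.eye(3)))
```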
