Similar Documents
20 similar documents found (search time: 15 ms)
1.
L1-norm-based common spatial patterns
Common spatial patterns (CSP) is a commonly used spatial filtering method for multichannel electroencephalogram (EEG) signals. The CSP criterion is formulated in terms of variance using the L2-norm, which makes CSP sensitive to outliers. In this paper, we propose a robust version of CSP, called CSP-L1, that maximizes the ratio of the filtered dispersion of one class to that of the other, with both dispersions formulated using the L1-norm rather than the L2-norm. The spatial filters of CSP-L1 are obtained by an iterative algorithm that is easy to implement and theoretically justified, and the resulting method is robust to outliers. Experimental results on a toy example and on BCI competition datasets demonstrate the efficacy of the proposed method.
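A minimal sketch of the L1 dispersion-ratio idea, using plain normalized subgradient ascent in place of the paper's dedicated iteration (function name, learning rate, and iteration count are illustrative assumptions):

```python
import numpy as np

def csp_l1_filter(X1, X2, n_iter=200, lr=0.05, seed=0):
    """Sketch of an L1-dispersion-ratio spatial filter.

    X1, X2: (n_samples, n_channels) arrays for the two classes.
    Maximizes J(w) = sum|X1 w| / sum|X2 w| by normalized subgradient
    ascent (a simplification of the paper's iterative algorithm).
    """
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(X1.shape[1])
    w /= np.linalg.norm(w)
    for _ in range(n_iter):
        p1, p2 = X1 @ w, X2 @ w
        d1, d2 = np.abs(p1).sum(), np.abs(p2).sum()
        # subgradient of the L1 dispersion ratio d1/d2
        g = (np.sign(p1) @ X1) / d2 - (d1 / d2**2) * (np.sign(p2) @ X2)
        w = w + lr * g
        w /= np.linalg.norm(w)
    return w
```

On synthetic data where the two classes have high variance along different channels, the learned filter favors the high-dispersion direction of class 1.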

2.
In the context of electroencephalogram (EEG)-based brain-computer interfaces (BCI), common spatial patterns (CSP) is widely used for spatially filtering multichannel EEG signals. CSP is a supervised technique that depends only on labeled trials, and its generalization performance deteriorates due to overfitting when the number of training trials is small. On the other hand, large numbers of unlabeled trials are relatively easy to obtain. In this paper, we contribute a comprehensive learning scheme for CSP (cCSP) that learns from both labeled and unlabeled trials. cCSP regularizes the CSP objective function by preserving the temporal relationship among samples of unlabeled trials in terms of linear representation, with the intrinsic temporal structure characterized by an l1 graph. As a result, the temporal correlation information of unlabeled trials is incorporated into CSP, yielding enhanced generalization capacity. Interestingly, the regularizer of cCSP can be interpreted as minimizing a non-task-related EEG component, which helps cCSP alleviate nonstationarities. Experimental results for single-trial EEG classification on publicly available EEG datasets confirm the effectiveness of the proposed method.

3.
To represent image shape features more accurately, a new shape descriptor that fuses boundary and region information is proposed. First, a two-dimensional discrete cosine transform is applied to the image, and the low-frequency coefficients are taken as region features. The image contour is then extracted and sampled to form an ordered list of points describing the shape boundary; for each sample point, two neighboring points are obtained by tracking equal distances clockwise and counterclockwise, and the arch height and centroid distance are computed. A frequency-domain descriptor of the complex function formed from arch height and centroid distance is then obtained, and the region and contour features are combined. Retrieval experiments on the MPEG-7 standard shape database show that the descriptor's retrieval performance is significantly better than that of comparable descriptors such as the triangle area function, the centroid distance function, and the arch-height/radius complex function.
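The two halves of the descriptor can be sketched roughly as follows; the low-frequency block size, sampling step, and normalization are illustrative assumptions, and the arch-height/centroid-distance complex function follows one plausible reading of the abstract:

```python
import numpy as np

def dct2_lowfreq(img, k=4):
    """Region feature: top-left k x k block of the (unnormalized)
    2-D DCT-II, computed directly from the cosine basis."""
    def dct_mat(n):
        i = np.arange(n)
        return np.cos(np.pi * (2 * i[None, :] + 1) * i[:, None] / (2 * n))
    M, N = img.shape
    coeffs = dct_mat(M) @ img @ dct_mat(N).T
    return coeffs[:k, :k].ravel()

def contour_feature(points, step=5, n_coeffs=8):
    """Contour feature: Fourier magnitudes of the complex function
    z(t) = arch_height(t) + 1j * centroid_distance(t) over sampled
    boundary points (a hypothetical reading of the descriptor)."""
    c = points.mean(axis=0)
    n = len(points)
    feats = []
    for t in range(n):
        p, a, b = points[t], points[(t - step) % n], points[(t + step) % n]
        chord, v = b - a, p - a
        # arch height: distance from p to the chord through its neighbours
        h = abs(chord[0] * v[1] - chord[1] * v[0]) / (np.linalg.norm(chord) + 1e-12)
        feats.append(h + 1j * np.linalg.norm(p - c))
    spec = np.abs(np.fft.fft(np.array(feats)))[:n_coeffs]
    return spec / (spec[0] + 1e-12)  # scale-normalized

def shape_descriptor(img, points):
    """Combined region + contour descriptor."""
    return np.concatenate([dct2_lowfreq(img), contour_feature(points)])
```

For a perfect circle the arch height and centroid distance are constant, so all contour harmonics beyond DC vanish.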

4.
Addressing the characteristics of image contour feature extraction, a gene-coding algorithm is proposed. First, pixels are mapped to gene-individual expressions and image contour features are distinguished by gene code values; features are then extracted by screening the attributes encoded by each gene of a pixel; finally, the procedure for extracting image contour feature information is given. Simulation experiments show that the contour edges extracted by gene coding are clear, with strong noise resistance, fast execution, and an edge-point preservation index closest to 1.

5.
Spatially consistent neighborhood preserving embedding for feature extraction of hyperspectral data
Manifold learning methods such as locally linear embedding (LLE) and neighborhood preserving embedding (NPE) can extract the main structural features of hyperspectral data, aiding understanding and further processing. However, these methods ignore the correlation between adjacent pixels in a hyperspectral image. To address this, a spatially consistent neighborhood preserving embedding (SC-NPE) feature extraction algorithm is proposed, which builds the local neighborhood structure of the data in the high-dimensional space through an optimized locally linear embedding that accounts for the correlation between adjacent pixels. It then seeks an optimized transformation matrix that projects the local neighborhood structure into a low-dimensional space, achieving feature extraction. Compared with LLE and NPE, SC-NPE considers both the manifold structure of the hyperspectral data and its spatial information in the image domain, making it better suited to hyperspectral feature extraction. Experimental results show that SC-NPE clearly outperforms comparable algorithms in hyperspectral image classification.

6.
The traditional system for extracting the spatial distribution features of urban landscapes is unreliable, so a system for extracting the spatial distribution features of urban landscapes under terrain influence is designed. In the hardware design, the processor hard core integrated in the chip is connected to the SoC components, and the pins of the high-definition multimedia interface and connectors are assigned, completing the hardware design. On this basis, the software design installs a cross-compilation toolchain, compiles the kernel, and introduces the Sobel algorithm, completing the design of the feature extraction system. To evaluate performance, a comparison experiment was designed: the spatial distribution feature histogram produced by the designed system matched the original histogram with 99.8% similarity, 23.5 percentage points higher than the traditional feature extraction system, verifying the designed system's reliability.
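The Sobel stage of the software design can be sketched as a plain gradient-magnitude edge map (numpy-only, illustrative):

```python
import numpy as np

def sobel_edges(img):
    """Minimal Sobel gradient-magnitude edge map (numpy only)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    ky = kx.T
    H, W = img.shape
    out = np.zeros((H, W))
    padded = np.pad(img.astype(float), 1, mode="edge")
    for i in range(H):
        for j in range(W):
            win = padded[i:i + 3, j:j + 3]
            gx, gy = (win * kx).sum(), (win * ky).sum()
            out[i, j] = np.hypot(gx, gy)
    return out
```

A vertical intensity step yields a strong response along the step columns and zero response in flat regions.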

7.
8.
The constrained capacity of a coherent coded modulation (CM) digital communication system with data-aided channel estimation and a discrete, equiprobable symbol alphabet is derived under the assumption that the system operates on a flat fading channel and uses an interleaver to combat the bursty nature of the channel. It is shown that linear minimum mean square error channel estimation follows directly from the derivation and links average mutual information to the channel dynamics. Under the assumption that known training symbols are transmitted, the achievable rate of the system is optimized with respect to the amount of training information needed. Furthermore, the results are compared to the additive white Gaussian noise channel and to the case in which ideal channel state information is available at the receiver.

9.
One of the most popular feature extraction algorithms for brain-computer interfaces (BCI) is common spatial patterns (CSP). Despite its known efficiency and widespread use, CSP is also very sensitive to noise and prone to overfitting. To address this issue, it has recently been proposed to regularize CSP. In this paper, we present a simple and unifying theoretical framework for designing such regularized CSP (RCSP) algorithms. We then review existing RCSP algorithms and describe how to cast them in this framework, and we propose four new RCSP algorithms. Finally, we compare the performance of 11 different algorithms (including the four new ones and the original CSP) on electroencephalography data from 17 subjects drawn from BCI competition datasets. Results showed that the best RCSP methods can outperform CSP by nearly 10% in median classification accuracy and lead to more neurophysiologically relevant spatial filters; they also enable efficient subject-to-subject transfer. Overall, the best RCSP algorithms were CSP with Tikhonov regularization and with weighted Tikhonov regularization, both proposed in this paper.
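One member of the RCSP family described here, CSP with a Tikhonov-style penalty, can be sketched by adding alpha*I to the composite covariance before the generalized eigendecomposition; this is an illustrative reading, not the paper's exact formulation:

```python
import numpy as np

def rcsp_tikhonov(covs1, covs2, alpha=0.1, n_filters=2):
    """Sketch of Tikhonov-regularized CSP.

    covs1/covs2: lists of per-trial channel covariance matrices.
    Returns the n_filters spatial filters (columns) with the largest
    generalized eigenvalues for class 1.
    """
    C1 = np.mean(covs1, axis=0)
    C2 = np.mean(covs2, axis=0)
    n = C1.shape[0]
    Cc = C1 + C2 + alpha * np.eye(n)  # regularized composite covariance
    # whiten the composite covariance, then diagonalize class 1
    d, U = np.linalg.eigh(Cc)
    P = U @ np.diag(d ** -0.5) @ U.T
    lam, V = np.linalg.eigh(P @ C1 @ P.T)
    order = np.argsort(lam)[::-1]
    return P.T @ V[:, order[:n_filters]]
```

With synthetic diagonal covariances, the top filter concentrates on the channel where class 1 has the most variance, maximizing the between-class variance ratio.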

10.
Real-time spatial referencing is an important alternative to tracking for designing spatially aware ophthalmic instrumentation for procedures such as laser photocoagulation and perimetry. It requires independent, fast registration of each image frame from a digital video stream (1024 x 1024 pixels) to a spatial map of the retina. Recently, we introduced a spatial referencing algorithm that works in three primary steps: 1) tracing the retinal vasculature to extract image features (landmarks); 2) invariant indexing to generate hypothesized landmark correspondences and initial transformations; and 3) alignment and verification steps to robustly estimate a 12-parameter quadratic spatial transformation between the image frame and the map. The goal of this paper is to introduce techniques that minimize the amount of computation needed for successful spatial referencing. The fundamental driving idea is to make feature extraction subservient to registration and, therefore, to produce only the information needed for verified, accurate transformations. To this end, the image is analyzed along one-dimensional vertical and horizontal grid lines to produce a regular sampling of the vasculature, needed for step 3) and to initiate step 1). Tracing of the vasculature is then prioritized hierarchically to quickly extract landmarks and groups (constellations) of landmarks for indexing. Finally, the tracing and spatial referencing computations are integrated so that landmark constellations found by tracing are tested immediately. The resulting implementation is an order of magnitude faster with the same success rate; the average total computation time is 31.2 ms per image on a 2.2-GHz Pentium Xeon processor.
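Step 3's 12-parameter quadratic transformation can be estimated by linear least squares over the quadratic monomial basis; this sketch assumes point correspondences are already available:

```python
import numpy as np

def _quad_basis(pts):
    """Monomial basis [1, x, y, x^2, xy, y^2] for each point."""
    x, y = pts[:, 0], pts[:, 1]
    return np.column_stack([np.ones_like(x), x, y, x**2, x * y, y**2])

def fit_quadratic_transform(src, dst):
    """Least-squares fit of the 12-parameter quadratic mapping
    (x, y) -> A @ [1, x, y, x^2, xy, y^2], with A a 2x6 matrix.
    src, dst: (n, 2) corresponding point arrays, n >= 6."""
    A, *_ = np.linalg.lstsq(_quad_basis(src), dst, rcond=None)
    return A.T  # 2x6: twelve parameters

def apply_quadratic_transform(A, pts):
    """Map points through the fitted quadratic transformation."""
    return _quad_basis(pts) @ A.T
```

Fitting against noise-free correspondences generated by a known transformation recovers its twelve parameters.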

11.
Transforming an original image into a high-dimensional (HD) feature has been proven effective for image classification. This paper presents a novel feature extraction method that exploits the HD feature space to improve discriminative ability for face recognition. We observed that the local binary pattern can be decomposed into bit-planes, each of which carries scale-specific directional information about the face image. Each bit-plane not only retains the inherent local structure of the face image but is also robust to illumination. By concatenating all the decomposed bit-planes, we generate an HD feature vector with improved discriminative ability. To reduce the computational complexity while preserving the incorporated local structural information, a supervised dimension reduction method, orthogonal linear discriminant analysis, is applied to the HD feature vector. Extensive experimental results show that existing classifiers with the proposed feature outperform those with other conventional features under various illumination, pose, and expression variations.
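The bit-plane decomposition itself is easy to sketch; this numpy-only version computes a basic 8-neighbour LBP map, splits it into its 8 bit-planes, and concatenates per-plane histograms (the supervised orthogonal-LDA reduction step is omitted):

```python
import numpy as np

def lbp_codes(img):
    """8-neighbour LBP code for each interior pixel (numpy only)."""
    c = img[1:-1, 1:-1]
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, (di, dj) in enumerate(shifts):
        nb = img[1 + di:img.shape[0] - 1 + di, 1 + dj:img.shape[1] - 1 + dj]
        code |= (nb >= c).astype(np.uint8) << bit
    return code

def bitplane_feature(img):
    """Concatenate normalized histograms of the 8 LBP bit-planes."""
    codes = lbp_codes(img)
    feats = []
    for b in range(8):
        plane = (codes >> b) & 1
        hist, _ = np.histogram(plane, bins=2, range=(0, 2))
        feats.append(hist / plane.size)
    return np.concatenate(feats)
```

Because LBP compares each pixel with its neighbours, the codes (and hence the feature) are unchanged by a constant illumination shift.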

12.
The distributed detection problem is considered from an information-theoretic point of view. An entropy-based cost function is used for system optimization. This cost function maximizes the amount of information transfer between the input and the output. Distributed detection system topologies with and without a fusion center are considered, and an optimal fusion rule and optimal decision rules are derived
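A one-sensor toy version of the entropy-based cost: choose the local decision threshold that maximizes the mutual information between the hypothesis and the quantized output (the priors, Gaussian observation model, and threshold grid are illustrative assumptions, not the paper's system model):

```python
import numpy as np
from math import erf

def mutual_information(p_joint):
    """I(X;Y) in bits from a joint pmf array."""
    px = p_joint.sum(axis=1, keepdims=True)
    py = p_joint.sum(axis=0, keepdims=True)
    mask = p_joint > 0
    return float((p_joint[mask] * np.log2(p_joint[mask] / (px @ py)[mask])).sum())

def best_threshold(prior=0.5, mu=1.0, sigma=1.0, grid=None):
    """Threshold maximizing I(H;U) for H in {0,1} with Gaussian
    observations N(0, sigma^2) under H=0 and N(mu, sigma^2) under H=1."""
    Phi = lambda z: 0.5 * (1 + erf(z / np.sqrt(2)))
    if grid is None:
        grid = np.linspace(-2, 3, 101)
    best = (-1.0, None)
    for t in grid:
        pfa = 1 - Phi(t / sigma)         # P(U=1 | H=0)
        pd = 1 - Phi((t - mu) / sigma)   # P(U=1 | H=1)
        joint = np.array([[(1 - prior) * (1 - pfa), (1 - prior) * pfa],
                          [prior * (1 - pd), prior * pd]])
        mi = mutual_information(joint)
        if mi > best[0]:
            best = (mi, t)
    return best  # (max mutual information, threshold)
```

With equal priors and symmetric Gaussians, the information-maximizing threshold sits at the midpoint mu/2 between the two means.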

13.
Web search provides access to biographical material in many forms. Focusing on scholars, this paper studies how to extract feature information from the material obtained and proposes a domain-knowledge-based feature extraction method. The method first calls the Google Search API to collect information about a person from web pages in real time, then extracts the person's attributes using web-page structure analysis, trigger-word recognition, and natural language processing, and finally generates and presents a standardized scholar résumé automatically. Based on this method, the authors designed and implemented a scholar feature extraction system.
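The trigger-word stage can be sketched with a few regular-expression rules; the field names and patterns below are hypothetical, and the real system additionally uses page-structure analysis and NLP:

```python
import re

# Hypothetical trigger-word rules in the spirit of the paper's
# attribute extraction step; patterns and field names are illustrative.
TRIGGERS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "title": re.compile(r"(associate professor|professor|lecturer)", re.I),
    "phd":   re.compile(r"ph\.?\s?d\.?\s+(?:in\s+)?([A-Za-z ]+?)(?:,|\.|$)", re.I),
}

def extract_profile(text):
    """Return a dict of scholar attributes found by trigger words."""
    profile = {}
    for field, pat in TRIGGERS.items():
        m = pat.search(text)
        if m:
            profile[field] = (m.group(1) if m.groups() else m.group(0)).strip()
    return profile
```

Applied to a short biography line, the rules pull out the matching attributes and leave absent fields out of the result.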

14.
In research on template attacks (TA), a key direction for improvement is how to exploit the information in power traces, select effective points of interest, and enhance template matching. This paper analyzes the strengths and weaknesses of current feature extraction methods for power traces and proposes a feature extraction method based on echo state networks (ESN). To address the reservoir parameter selection problem in ESN classification, the method performs a grid search over the parameter space using time-series prediction accuracy as the criterion, and exploits the neural network's ability to process data samples directly as quantitative knowledge in order to test and evaluate feature extraction from coarsely aligned power traces. Experimental results show that, with the same number of power traces and a reasonable choice of kernel parameters, the ESN-based feature extraction method can reduce the template attack's dependence on trace preprocessing and improve the classification accuracy of the correct key.
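A minimal echo state network feature extractor for power traces might look like the following; reservoir size, spectral radius, and leak rate are illustrative, not the tuned values found by the paper's grid search:

```python
import numpy as np

def esn_features(traces, n_reservoir=50, spectral_radius=0.9,
                 leak=0.3, seed=0):
    """Sketch of ESN-based trace feature extraction: drive a fixed
    random reservoir with each 1-D power trace and use the final
    reservoir state as the feature vector.

    traces: (n_traces, n_samples) array.
    """
    rng = np.random.default_rng(seed)
    W_in = rng.uniform(-0.5, 0.5, size=(n_reservoir, 1))
    W = rng.uniform(-0.5, 0.5, size=(n_reservoir, n_reservoir))
    # rescale the recurrent weights to the target spectral radius
    W *= spectral_radius / np.max(np.abs(np.linalg.eigvals(W)))
    feats = np.zeros((len(traces), n_reservoir))
    for i, trace in enumerate(traces):
        x = np.zeros(n_reservoir)
        for u in trace:
            x = (1 - leak) * x + leak * np.tanh(W_in[:, 0] * u + W @ x)
        feats[i] = x
    return feats
```

The reservoir is fixed, so the mapping is deterministic: identical traces yield identical features while distinct traces are separated in the feature space.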

15.
Unsupervised image-set clustering using an information theoretic framework
In this paper, we combine discrete and continuous image models with information-theoretic criteria for unsupervised hierarchical image-set clustering. The continuous image modeling is based on a mixture of Gaussian densities. The unsupervised image-set clustering is based on a generalized version of a recently introduced information-theoretic principle, the information bottleneck principle. Images are clustered such that the mutual information between the clusters and the image content is maximally preserved. Experimental results demonstrate the performance of the proposed framework for image clustering on a large image set, with information-theoretic tools used to evaluate cluster quality. Particular emphasis is placed on applying the clustering to efficient image search and retrieval.

16.
In recent years, diffusion tensor imaging (DTI) has become a popular in vivo diagnostic imaging technique in the radiological sciences. For this imaging technique to be more effective, proper image analysis techniques suited to these high-dimensional data need to be developed. In this paper, we present a novel definition of tensor "distance" grounded in concepts from information theory and incorporate it in the segmentation of DTI. In a DTI, the symmetric positive definite (SPD) diffusion tensor at each voxel can be interpreted as the covariance matrix of a local Gaussian distribution. Thus, a natural measure of dissimilarity between SPD tensors is the Kullback-Leibler (KL) divergence or a relative of it. We propose the square root of the J-divergence (symmetrized KL) between the two Gaussian distributions corresponding to the diffusion tensors being compared, which leads to a novel closed-form expression for the "distance" as well as for the mean value of a DTI. Unlike the traditional Frobenius norm-based tensor distance, our "distance" is affine invariant, a desirable property in segmentation and many other applications. We then incorporate this new tensor "distance" in a region-based active contour model for DTI segmentation. Synthetic and real data experiments depict the performance of the proposed model.
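Under the usual zero-mean Gaussian interpretation, the symmetrized-KL tensor "distance" has the closed form below (up to the paper's constant convention), and its affine invariance can be checked directly:

```python
import numpy as np

def tensor_distance(A, B):
    """Square root of the symmetrized KL divergence between zero-mean
    Gaussians with SPD covariances A and B:
        d(A, B) = sqrt( tr(A^-1 B + B^-1 A)/2 - n )
    (up to the paper's constant convention)."""
    n = A.shape[0]
    val = 0.5 * np.trace(np.linalg.solve(A, B) + np.linalg.solve(B, A)) - n
    return np.sqrt(max(val, 0.0))  # clamp tiny negative round-off
```

Because tr((MAM^T)^-1 (MBM^T)) = tr(A^-1 B) for any invertible M, the distance is invariant under the affine change of coordinates that maps A, B to MAM^T, MBM^T.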

17.
Classification and feature extraction of AVIRIS data
The processing of Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) data is discussed in terms of both feature extraction and classification. The recently proposed decision boundary feature extraction method is reviewed and then applied in experiments. Classification results for AVIRIS data collected over Iceland in 1991 are given, with emphasis on geological applications. The classifiers used include neural network methods and statistical approaches. The decision boundary feature extraction method shows excellent performance for these data

18.
Kemp, Z. IEEE Multimedia, 1995, 2(4): 68-76
Spatial information systems increasingly require the incorporation of multimedia data types. Pictures, sounds, animated sequences, and unstructured text may form an integral part of geographically located entities in information systems. An example ecological application with extensive multimedia requirements illustrates how multimedia data can be integrated with spatial information systems in a generic, application-independent manner

19.
An optical orthogonal signature pattern code (OOSPC) is a collection of (0,1) two-dimensional (2-D) patterns with good correlation properties (i.e., high autocorrelation peaks with low sidelobes, and low cross-correlation functions). Such codes find application, for example, in transmitting and accessing images in parallel in "multicore-fiber" code-division multiple-access (CDMA) networks. Up to now, all work on OOSPCs has been based on the assumption that at most one pulse per column, or one pulse per row and column, is allowed in each two-dimensional pattern. However, this restriction may not be required in such multiple-access networks if timing information can be extracted by other means rather than from the autocorrelation function. A new class of OOSPCs is constructed without this restriction. The relationships between two-dimensional binary discrete auto- and cross-correlation arrays and their corresponding "sets" for OOSPCs are first developed. In addition, new bounds on the size of this special class of OOSPCs are derived. Afterwards, four algebraic techniques for constructing these new codes are investigated. Some of these constructions achieve the upper bounds with equality and are thus optimal. Finally, the codes generated by some constructions satisfy the restriction of at most one pulse per row or column and hence can be used in applications requiring, for example, frequency-hopping patterns
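The 2-D periodic auto- and cross-correlations underlying these definitions can be computed with the FFT correlation theorem:

```python
import numpy as np

def periodic_correlation(A, B):
    """2-D periodic cross-correlation of two (0,1) patterns via FFT:
    R[u, v] = sum_{x,y} A[x, y] * B[(x+u) mod m, (y+v) mod n]."""
    F = np.fft.fft2(A).conj() * np.fft.fft2(B)
    return np.real(np.fft.ifft2(F)).round().astype(int)
```

For a pattern of weight w, the autocorrelation peak at the zero shift equals w, and every nonzero shift gives a strictly smaller sidelobe unless the pattern is invariant under that shift.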

20.
Wavelet-based feature extraction from oceanographic images
Features in satellite images of the oceans often have weak edges. These images also contain a significant amount of noise, due either to clouds or to atmospheric humidity. The presence of noise compounds the problems associated with feature detection, because any traditional noise removal technique will also remove weak edges. Recently, there have been rapid advances in image processing resulting from the development of the mathematical theory of wavelet transforms. This theory led to multifrequency channel decomposition of images, which in turn led to important algorithms for reconstructing images at various resolutions from their decompositions. The ability to analyze images at various resolutions is useful not only for noise suppression, but also for the detection and classification of fine features. This paper presents a new computational scheme based on multiresolution decomposition for extracting features of interest from oceanographic images by suppressing the noise. The multiresolution analysis from the median presented by Starck, Murtagh, and Bijaoui (1994) is used for noise suppression
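The paper uses the median-based multiresolution scheme of Starck, Murtagh, and Bijaoui; as a simpler stand-in, the sketch below does one level of a 2-D Haar decomposition and soft-thresholds the detail subbands (an assumption for illustration, not the paper's exact method):

```python
import numpy as np

def haar2(img):
    """One level of the 2-D Haar transform: returns (LL, LH, HL, HH)."""
    a = (img[0::2] + img[1::2]) / 2.0
    d = (img[0::2] - img[1::2]) / 2.0
    LL = (a[:, 0::2] + a[:, 1::2]) / 2.0
    LH = (a[:, 0::2] - a[:, 1::2]) / 2.0
    HL = (d[:, 0::2] + d[:, 1::2]) / 2.0
    HH = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return LL, LH, HL, HH

def ihaar2(LL, LH, HL, HH):
    """Exact inverse of haar2."""
    H, W = LL.shape
    a = np.zeros((H, 2 * W)); d = np.zeros((H, 2 * W))
    a[:, 0::2] = LL + LH; a[:, 1::2] = LL - LH
    d[:, 0::2] = HL + HH; d[:, 1::2] = HL - HH
    out = np.zeros((2 * H, 2 * W))
    out[0::2] = a + d; out[1::2] = a - d
    return out

def denoise(img, thresh):
    """Suppress noise by soft-thresholding the detail subbands while
    keeping the coarse approximation intact."""
    LL, LH, HL, HH = haar2(img)
    soft = lambda c: np.sign(c) * np.maximum(np.abs(c) - thresh, 0.0)
    return ihaar2(LL, soft(LH), soft(HL), soft(HH))
```

With a zero threshold the round trip reconstructs the image; a positive threshold shrinks every detail coefficient, suppressing fine-scale noise.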
