Similar Literature
Found 20 similar documents (search time: 31 ms)
1.
Like three-dimensional Euclidean space, both the time domain and the frequency domain of a signal are Hilbert spaces. By analogy, treating signals as vectors and signal basis functions as basis vectors, a geometric intuition for the time-domain and frequency-domain spaces can be established. The properties of three-dimensional Euclidean space thus provide an intuitive understanding of both spaces, and the Fourier coefficients are, in essence, the orthogonal projection components of a signal onto its basis functions. The time-domain and frequency-domain spaces of a signal are ring-isomorphic, with the Fourier transform as the isomorphism. From the properties of Hilbert spaces and this ring isomorphism, several properties of the Fourier transform can be derived.
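The claim that Fourier coefficients are orthogonal projections can be checked numerically. A minimal NumPy sketch (illustrative, not from the article): the k-th DFT coefficient equals the inner product of the signal with the k-th complex-exponential basis vector, just as projecting a 3-D vector onto an orthonormal basis.

```python
import numpy as np

N = 8
n = np.arange(N)
signal = np.sin(2 * np.pi * n / N) + 0.5 * np.cos(4 * np.pi * n / N)

# Basis matrix: row k is the conjugated complex exponential e^{-i 2*pi*k*n/N}
basis = np.exp(-2j * np.pi * np.outer(n, n) / N)

coeffs_by_projection = basis @ signal   # inner products with basis vectors
coeffs_by_fft = np.fft.fft(signal)      # library FFT for comparison

# Projection onto the basis reproduces the Fourier coefficients exactly
assert np.allclose(coeffs_by_projection, coeffs_by_fft)
```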

2.
We present a novel technique for encoding and decoding constant weight binary vectors that uses a geometric interpretation of the codebook. Our technique is based on embedding the codebook in a Euclidean space of dimension equal to the weight of the code. The encoder and decoder mappings are then interpreted as a bijection between a certain hyper-rectangle and a polytope in this Euclidean space. An inductive dissection algorithm is developed for constructing such a bijection. We prove that the algorithm is correct and then analyze its complexity. The complexity depends on the weight of the vector, rather than on the block length as in other algorithms. This approach is advantageous when the weight is smaller than the square root of the block length.

3.
Addressed here is the problem of constructing and analyzing expression-invariant representations of human faces. We demonstrate and justify experimentally a simple geometric model that allows facial expressions to be described as isometric deformations of the facial surface. The main step in the construction of an expression-invariant representation of a face is embedding the facial intrinsic geometric structure into some low-dimensional space. We study the influence of the embedding-space geometry and dimensionality on the representation accuracy and argue that, compared to its Euclidean counterpart, spherical embedding leads to notably smaller metric distortions. We support our claim experimentally, showing that a smaller embedding error leads to better recognition.

4.
Palmprint recognition based on multilinear kernel principal component analysis   (Total citations: 5; self-citations: 4; others: 1)
A new palmprint recognition method using multilinear kernel principal component analysis (MKPCA) is proposed. First, MKPCA projects the input sample images onto a high-dimensional feature space F via a nonlinear transform and applies multilinear principal component analysis (MPCA) directly to the palmprint tensor for dimensionality reduction, yielding a low-dimensional projection tensor. The palmprint images are then projected onto the tensor subspace to extract feature vectors. Finally, the cosine distance between feature vectors is computed for palmprint matching. Using the PolyU palmprint image database...
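The final matching step described above can be sketched as follows (assumed details, not the paper's code): once feature vectors have been extracted, palmprint matching reduces to a nearest-neighbour search under cosine distance.

```python
import numpy as np

def cosine_distance(a, b):
    """1 - cos(angle) between two feature vectors; lower means more similar."""
    return 1.0 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

rng = np.random.default_rng(0)
gallery = rng.normal(size=(5, 16))               # 5 enrolled feature vectors
probe = gallery[3] + 0.01 * rng.normal(size=16)  # noisy copy of template 3

distances = [cosine_distance(probe, g) for g in gallery]
best_match = int(np.argmin(distances))
assert best_match == 3   # the probe matches its own template
```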

5.
We present a geometry-based indexing approach for the retrieval of video databases. It consists of two modules: 3D object shape inferencing from video data and geometric modeling from the reconstructed shape structure. A motion-based segmentation algorithm employing feature block tracking and principal component split is used for multi-moving-object motion classification and segmentation. After segmentation, feature blocks from each individual object are used to reconstruct its motion and structure through a factorization method. The estimated shape structure and motion parameters are used to generate the implicit polynomial model for the object. The video data is retrieved using the geometric structure of objects and their spatial relationship. We generalize the 2D string to 3D to compactly encode the spatial relationship of objects.

6.
For text-independent speaker recognition with Gaussian mixture models (GMMs), the covariance matrices of the GMM are usually taken to be diagonal, for the sake of parameter count and computational cost, under the assumption that the dimensions of the observation vectors are uncorrelated. In most cases, however, this assumption does not hold. To make the observation space better suited to fitting by a diagonal-covariance GMM, a decorrelating transform is usually applied in the parameter space or the model space. This paper proposes an improved PCA method for model-space decorrelation: principal component analysis is applied directly to the covariance of each Gaussian component of the GMM, so that the parameter-space distribution better matches a mixture of Gaussians with diagonal covariances, and the number of parameters and the computational cost are reduced by sharing the PCA transform matrices. Speaker recognition experiments on the Microsoft speech corpus show a 35% relative reduction in error rate over the best result of a conventional diagonal-covariance GMM system.
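The core idea, PCA as a decorrelating rotation, can be illustrated with a minimal NumPy sketch (an assumption-laden simplification, not the paper's shared-transform algorithm): the eigenvectors of a sample covariance define a rotation, and in the rotated space the covariance is diagonal, which is exactly what a diagonal-covariance Gaussian assumes.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(3, 3))
X = rng.normal(size=(1000, 3)) @ A.T        # correlated features

cov = np.cov(X, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)      # PCA transform from covariance
X_rot = X @ eigvecs                         # project onto principal axes

cov_rot = np.cov(X_rot, rowvar=False)
off_diag = cov_rot - np.diag(np.diag(cov_rot))
assert np.max(np.abs(off_diag)) < 1e-8      # covariance is now diagonal
```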

7.
Robust loop-closure detection is essential for visual SLAM. Traditional methods often focus on the geometric and visual features of a scene but ignore the semantic information provided by objects. Based on this consideration, we present a strategy that models the visual scene as a semantic sub-graph, preserving only the semantic and geometric information from object detection. To align two sub-graphs efficiently, we use a sparse Kuhn–Munkres algorithm to speed up the search for correspondences among nodes. The shape similarity and the Euclidean distance between objects in 3-D space are jointly leveraged to measure image similarity through graph matching. The proposed approach has been analyzed and compared with state-of-the-art algorithms on several datasets as well as two indoor real scenes; the results indicate that our semantic graph-based representation, which extracts no visual features, is feasible for loop-closure detection at competitive precision.
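The node-alignment step can be sketched with SciPy's Hungarian (Kuhn–Munkres) solver. This is illustrative only: the object labels, positions, and the particular blend of label penalty and Euclidean distance in the cost matrix are assumptions, not the paper's exact cost terms.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Two tiny "semantic sub-graphs": object labels plus 2-D centroids
labels_a = ["chair", "table", "monitor"]
labels_b = ["monitor", "chair", "table"]
pos_a = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
pos_b = np.array([[0.1, 1.0], [0.1, 0.0], [1.1, 0.0]])

# Cost: large penalty for a label mismatch plus geometric distance
cost = np.zeros((3, 3))
for i in range(3):
    for j in range(3):
        label_penalty = 0.0 if labels_a[i] == labels_b[j] else 10.0
        cost[i, j] = label_penalty + np.linalg.norm(pos_a[i] - pos_b[j])

rows, cols = linear_sum_assignment(cost)    # optimal node correspondence
matches = {labels_a[i]: labels_b[j] for i, j in zip(rows, cols)}
assert matches == {"chair": "chair", "table": "table", "monitor": "monitor"}
```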

8.
The basic idea behind energy transfer features is that the appearance of objects can be described using a function of the energy distribution in images. Energy sources are placed inside the image, and energy is transferred from the sources during a certain chosen time. The values of the energy distribution function then have to be reduced to a reasonable number of values, which can be done simply by sampling: the input image is divided into regular cells and the mean value is calculated inside each cell. The sample values are then treated as a vector that serves as input to an SVM classifier. We propose an improvement to this process: the discrete cosine transform coefficients are calculated inside the cells (instead of the mean values) to construct the feature vector for the face and pedestrian detectors. To reduce the number of coefficients, we use patterns in which the coefficients are grouped into regions. In the face detector, principal component analysis is also used to create a feature vector of relatively small dimension. The results show that, with this approach, objects can be encoded efficiently with a relatively short vector, with results that outperform state-of-the-art detectors.
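The per-cell DCT step can be sketched as follows. The cell size and the "keep the low-frequency corner" coefficient pattern are assumptions for illustration; the paper groups coefficients into its own region patterns.

```python
import numpy as np
from scipy.fft import dctn

rng = np.random.default_rng(2)
image = rng.random((32, 32))   # stand-in for an energy distribution map
cell = 8                       # cell size in pixels (assumed)
keep = 3                       # keep the top-left keep x keep coefficients

features = []
for y in range(0, image.shape[0], cell):
    for x in range(0, image.shape[1], cell):
        block = image[y:y + cell, x:x + cell]
        coeffs = dctn(block, norm="ortho")          # 2-D DCT of the cell
        features.append(coeffs[:keep, :keep].ravel())
feature_vector = np.concatenate(features)

# 16 cells x 9 retained coefficients each -> a short descriptor
assert feature_vector.shape == (144,)
```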

9.
10.
谢晓丹 (Xie Xiaodan), 李伯虎 (Li Bohu), 柴旭东 (Chai Xudong). Acta Electronica Sinica (电子学报), 2017, 45(6): 1362-1366
To address the computation and storage problems that kernel principal component analysis widely faces when the number of training samples is large, a sparse kernel principal component analysis algorithm based on one-class support vector theory is proposed. The method is suited to applications where computation and storage are limited, such as image retrieval systems and computer-aided medical diagnosis systems on small hardware platforms. By solving an optimization problem, a small number of typical samples that represent the original sample space are found; using these samples to compute the kernel data matrix greatly reduces the time and storage cost of the kernel matrix computation and, on a limited training set, effectively improves recognition rate and computational efficiency for image processing on such hardware platforms.
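The idea of shrinking the kernel-matrix computation to a few representative samples can be sketched with scikit-learn. This is an illustrative stand-in, not the paper's optimization: here a one-class SVM's support vectors play the role of the typical samples, and kernel PCA is fitted on them alone.

```python
import numpy as np
from sklearn.svm import OneClassSVM
from sklearn.decomposition import KernelPCA

rng = np.random.default_rng(3)
X = rng.normal(size=(500, 4))

# One-class SVM picks a subset of samples (its support vectors)
ocsvm = OneClassSVM(kernel="rbf", nu=0.1).fit(X)
representatives = X[ocsvm.support_]          # typical samples only

# Kernel PCA trained on the small representative set
kpca = KernelPCA(n_components=2, kernel="rbf").fit(representatives)
projected = kpca.transform(X)                # project all data cheaply

assert representatives.shape[0] < X.shape[0]  # smaller kernel matrix
assert projected.shape == (500, 2)
```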

11.
For pre- and post-earthquake remote-sensing images, registration is a challenging task due to possible deformations of the objects to be registered. To overcome this problem, a registration method based on robust weighted kernel principal component analysis (RWKPCA) is proposed to precisely register variform objects. First, an RWKPCA method is developed to capture the common robust kernel principal components (RKPCs) of the variform objects. Second, a registration approach is derived from the projection onto the RKPCs. Finally, two experiments are conducted on SAR image registration from the Wenchuan earthquake of May 12, 2008; the results show that the method is very effective in capturing structural patterns and generalizes well for registration.

12.
In PCA-based denoising, when grouping local pixels, using Euclidean distance alone as the criterion for image-block similarity can bias the results, because noise is uncertain and random. To address this problem, this paper proposes an LPG-PCA image denoising algorithm based on vector similarity: vector similarity is combined with Euclidean distance as the criterion for judging similar image blocks, optimizing their selection. In addition, an adaptive method is used to choose the number of similar-block samples, making the sample count more reasonable and further improving denoising quality. Experimental results show that the proposed algorithm outperforms the traditional LPG-PCA denoising algorithm in peak signal-to-noise ratio and structural similarity, and also has some denoising effect on submillimeter-wave imaging.
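The combined similarity criterion can be sketched as a blend of a Euclidean term and a vector (cosine) term. The weighting and normalization below are assumptions for illustration, not the paper's exact formula.

```python
import numpy as np

def patch_score(ref, cand, alpha=0.5):
    """Lower is more similar: blends normalized Euclidean distance with
    cosine (vector) dissimilarity between two image patches."""
    euclid = np.linalg.norm(ref - cand) / ref.size
    cosine = 1.0 - np.dot(ref.ravel(), cand.ravel()) / (
        np.linalg.norm(ref) * np.linalg.norm(cand) + 1e-12)
    return alpha * euclid + (1.0 - alpha) * cosine

rng = np.random.default_rng(4)
ref = rng.random((5, 5))
similar = ref + 0.01 * rng.normal(size=(5, 5))   # near-duplicate patch
dissimilar = rng.random((5, 5))                  # unrelated patch

# The combined score ranks the near-duplicate as more similar
assert patch_score(ref, similar) < patch_score(ref, dissimilar)
```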

13.
14.
We show that for a special class of probability distributions that we call contoured distributions, information-theoretic invariants and inequalities are equivalent to geometric invariants and inequalities of bodies in Euclidean space associated with the distributions. Using this, we obtain characterizations of contoured distributions with extremal Shannon and Rényi entropy. We also obtain a new reverse information-theoretic inequality for contoured distributions.

15.
Principal component analysis (PCA) is applied to audio-signal reconstruction for a high-speed-vision laser microphone, extracting speech information from the dynamic changes of laser speckle on the surface of a lightweight elastic object in the sound field. Each frame of the high-speed speckle video is treated as a vector in a high-dimensional space, and the frames are stacked in order into a data matrix; PCA is then used for feature extraction. The speech information resides in the principal components with larger variance, and usually the first principal component alone suffices to reconstruct a clear speech signal. Experiments show that PCA places few restrictions on speckle grain size and gray-level distribution: even with a small sampling region and different reflecting materials, speech signals distinguishable by the human ear can be reconstructed. Moreover, owing to the unsupervised nature of the PCA machine-learning algorithm, the frames at the beginning of the video can be taken as a training set to extract the eigenvectors of the audio-bearing principal components, which then serve as projection bases for subsequent frame vectors, enabling fast speech extraction.
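The frame-stacking pipeline above can be sketched with synthetic data standing in for real speckle video: a fixed spatial pattern modulated by a hidden waveform, recovered as the first principal component's score series (via SVD). The synthetic signal model is an assumption for illustration.

```python
import numpy as np

rng = np.random.default_rng(5)
n_frames, n_pixels = 400, 64
t = np.arange(n_frames)
audio = np.sin(2 * np.pi * t / 25.0)       # hidden audio modulation

pattern = rng.normal(size=n_pixels)        # fixed "speckle" pattern
frames = (np.outer(audio, pattern)
          + 0.05 * rng.normal(size=(n_frames, n_pixels)))  # noisy video

X = frames - frames.mean(axis=0)           # center each pixel over time
_, _, Vt = np.linalg.svd(X, full_matrices=False)
recovered = X @ Vt[0]                      # first PC score series

# The first PC reproduces the waveform up to scale and sign
corr = abs(np.corrcoef(recovered, audio)[0, 1])
assert corr > 0.99
```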

16.
Mathematical morphology is well suited to capturing geometric information; hence, morphology-based approaches have been popular for object shape representation. The two primary morphology-based approaches, the morphological skeleton and the morphological shape decomposition (MSD), each represent an object as a collection of disjoint sets. A practical shape representation scheme, though, should give a representation that is computationally efficient to use. Unfortunately, little work has been done on efficiency for the morphological skeleton and the MSD. We propose a flexible search-based shape representation scheme that typically gives more efficient representations than the morphological skeleton and MSD. Our method decomposes an object into a number of simple components based on homothetics of a set of structuring elements. To form the representation, the components are combined using set union and set difference operations. We use three constituent component types and a thorough cost-based search strategy to find efficient representations. We also consider allowing object representation error, which may yield even more efficient representations.

17.
18.
Methods of computational anatomy are typically based on a spatial transformation that maps a template to an individual anatomy and vice versa. However, important morphological characteristics are frequently not captured by this transformation, thereby leading to lossy representations. We extend this formulation by incorporating residual anatomical information, i.e., information that is not captured by the shape transformation but is necessary in order to fully and exactly reconstruct the anatomy under measurement. We, therefore, arrive at a lossless morphological representation. By virtue of being lossless, this representation allows us to represent the same anatomy by an infinite number of pairs [transformation, residual], since different residuals correspond to different transformations. We treat these pairs as members of an anatomical equivalence class (AEC), which we approximate using principal component analysis. We show that projection onto the orthogonal to the AEC subspace produces measurements that allow us to better detect morphological abnormalities by eliminating variation in the data that is irrelevant and confounds underlying subtle morphological characteristics. Finally, we show that higher classification rates between a group of normal brains and a group of brains with localized atrophy are obtained if we use nonmetric distances between AECs instead of conventional Euclidean distances between individual morphological measurements. The results confirm that this representation can improve the results compared to conventional analysis, but also highlight limitations of the current approach and point to directions of further development of this general morphological analysis framework.

19.
Two-stage principal component analysis of color   (Total citations: 1; self-citations: 0; others: 1)

20.