Similar Articles (20 results)
1.
Small sample size and high computational complexity are two major problems encountered when traditional kernel discriminant analysis methods are applied to high-dimensional pattern classification tasks such as face recognition. In this paper, we introduce a new kernel discriminant learning method, which is able to effectively address the two problems by using regularization and subspace decomposition techniques. Experiments performed on real face databases indicate that the proposed method outperforms, in terms of classification accuracy, existing kernel methods, such as kernel principal component analysis and kernel linear discriminant analysis, at a significantly reduced computational cost.

2.
We introduce a new methodology for measuring the degree of similarity between two intuitionistic fuzzy sets. The new method is developed on the basis of a distance defined on an interval through the convex combination of its endpoints, and it also exploits the properties of the min and max operators. It is shown that, in contrast to existing methods, the proposed method satisfies all the well-known properties of a similarity measure and produces no counter-intuitive results. The validity and applicability of the proposed similarity measure are illustrated with two examples, one in pattern recognition and one in medical diagnosis.
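
The abstract does not reproduce the convex-combination formula itself, so the following sketch uses a standard distance-based similarity over (membership, non-membership) pairs only to illustrate the kind of measure being compared; the paper's actual definition differs.

```python
# Illustrative similarity between two intuitionistic fuzzy sets (IFSs).
# Each set is a list of (membership, non-membership) pairs over the same
# universe. This is a standard distance-based measure, not the paper's
# exact convex-combination formula.

def ifs_similarity(a, b):
    """Return a similarity in [0, 1]; 1 means identical sets."""
    if len(a) != len(b):
        raise ValueError("sets must be defined over the same universe")
    n = len(a)
    total = sum(abs(ma - mb) + abs(na - nb)
                for (ma, na), (mb, nb) in zip(a, b))
    return 1.0 - total / (2.0 * n)

A = [(0.7, 0.2), (0.4, 0.5), (0.9, 0.0)]
B = [(0.6, 0.3), (0.4, 0.5), (0.8, 0.1)]
print(ifs_similarity(A, A))  # identical sets -> 1.0
print(ifs_similarity(A, B))
```

A measure of this family is symmetric and bounded in [0, 1], two of the well-known properties the abstract refers to.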

3.
The focal problems of projection include out-of-focus images caused by incomplete mechanical focusing of the projector and screen-door effects produced by projection pixelation. To eliminate these defects and enhance the imaging quality and clarity of projectors, a novel adaptive projection defocus algorithm is proposed based on multi-scale convolution kernel templates. The algorithm applies an improved Sobel-Tenengrad focus evaluation function to calculate the sharpness of the intensity-equalized image and then constructs multi-scale defocus convolution kernels to remap and render the defocused projection image. The corrected images eliminate out-of-focus effects and are sharper than the uncorrected ones. Experiments show that the algorithm is fast and robust: it effectively eliminates visual artifacts, runs in real time on a self-designed smart projection system, and significantly improves the resolution and clarity perceived by the observer.
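
The focus-evaluation step can be sketched as the classic Tenengrad measure: the sum of squared Sobel gradient magnitudes. This shows only that one ingredient; the paper's improved variant and its multi-scale defocus kernels are not reproduced.

```python
# Minimal Tenengrad sharpness measure: sum of squared Sobel gradient
# magnitudes over the image interior. Sharper images (stronger edges)
# score higher.

SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
SOBEL_Y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def tenengrad(image):
    """image: 2-D list of gray levels; returns a non-negative sharpness score."""
    h, w = len(image), len(image[0])
    score = 0.0
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(SOBEL_X[j][i] * image[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(SOBEL_Y[j][i] * image[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            score += gx * gx + gy * gy
    return score

sharp = [[0] * 4 + [255] * 4 for _ in range(8)]  # hard vertical edge
flat = [[128] * 8 for _ in range(8)]             # no detail at all
print(tenengrad(sharp) > tenengrad(flat))        # True: edges raise the score
```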

4.
Color quantization is a process that compresses an image's color space while minimizing visual distortion. Quantization based on preclustering has low computational complexity but cannot guarantee precision. Quantization based on postclustering can produce high-quality results, but it must traverse the image pixels iteratively and therefore carries a heavy computational burden; revised versions have improved the precision without reducing the complexity. Balancing quantization quality against computational complexity is a long-standing challenge in color quantization. In this paper, a two-stage quantization framework is proposed to achieve this balance. In the first stage, the high-resolution color space is compressed to a condensed color space by thresholding roughness indices. Instead of linear compression, we propose a generic roughness measure that generates a finer segmentation of the image colors and thus introduces less distortion. In the second stage, the initially compressed colors are further clustered into a palette using Weighted Rough K-means to obtain the final quantization. Our objective is to design a postclustering quantization strategy at the color-space level rather than the pixel level. Because the quantization operates in the precisely compressed color space, the computational cost is greatly reduced while the quantization quality is maintained. Extensive experimental results validate the efficiency of the proposed method, which produces high-quality color quantization at low computational complexity.
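
The second-stage palette clustering can be sketched with plain k-means over RGB triples; the paper uses Weighted Rough K-means on an already-compressed color space, so ordinary k-means with farthest-point seeding is substituted here purely for brevity.

```python
# Toy palette clustering: plain k-means on RGB triples with deterministic
# farthest-point initialization. Stands in for the paper's Weighted Rough
# K-means second stage.

def dist2(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def kmeans_palette(colors, k, iters=20):
    # Farthest-point seeding keeps the initial centers well separated.
    centers = [colors[0]]
    while len(centers) < k:
        centers.append(max(colors, key=lambda c: min(dist2(c, t) for t in centers)))
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for c in colors:
            groups[min(range(k), key=lambda j: dist2(c, centers[j]))].append(c)
        centers = [tuple(sum(ch) / len(g) for ch in zip(*g)) if g else centers[i]
                   for i, g in enumerate(groups)]
    return centers

# Two tight color clusters: near-black and near-white pixels.
pixels = [(10, 10, 10), (12, 9, 11), (8, 13, 10),
          (250, 248, 252), (247, 251, 249), (252, 250, 248)]
palette = sorted(kmeans_palette(pixels, k=2))
print(palette)  # one dark and one light palette color
```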

5.
A reformulated kernel algorithm for Fisher discriminant analysis is proposed that can handle both two-class problems and problems with more than two classes. The algorithm assumes that, in feature space, the discriminant vector can be approximated by a linear combination of a subset of the training samples, called "significant nodes". Once the significant nodes are identified, the resulting kernel Fisher discriminant analysis is more efficient at classification than the naive version. In this paper, a recursive algorithm for selecting the significant nodes is developed in detail. Experiments show that the novel algorithm is both effective and highly efficient in classification.

6.
In this paper, an efficient similarity measure is proposed for printed circuit board (PCB) surface defect detection. The advantage of the presented approach is that similarity between the scene image and the reference image of the PCB surface is measured without computing image features such as eigenvalues and eigenvectors. In the proposed approach, a symmetric matrix is calculated from the companion matrices of the two compared images, and the rank of this symmetric matrix is used as the similarity metric for defect detection: the rank is zero for defect-free images and distinctly large for defective ones. The measure is reliable and tolerant of local variations and misalignment. Experiments are carried out on a variety of PCB images, and the approach is also tested under varying illumination and noise. Experimental results show the effectiveness of the proposed approach for detecting and locating local defects in complicated component-mounted PCB images.

7.
In this article an efficient algorithm for computation of the manipulator inertia matrix is presented. The algorithm is derived based on Newton's and Euler's laws governing the motion of rigid bodies. Using spatial notations, the algorithm leads to the definition of the composite rigid-body spatial inertia which is a spatial representation of the notion of augmented body. The equations resulting from this algorithm are derived in a coordinate-free form. The choice of the coordinate frame for projection of the coordinate-free equations, that is, the intrinsic equations, is discussed by analyzing the vectors and the tensors involved in the final equations. Previously proposed algorithms, the physical interpretations leading to their derivation, and the redundancy in their computations are analyzed. The developed algorithm achieves a greater efficiency by eliminating the redundancy in the intrinsic equations as well as by a suitable choice of coordinate frame for projection of the intrinsic equations.

8.
Fundamental matrix estimation for wide baseline images is significantly difficult due to the fact that the proportion of inliers in putative correspondences is generally very low. Traditional robust fundamental matrix estimation methods, such as RANSAC, will encounter the problems of computational inefficiency and low accuracy when outlier ratio is high. In this paper, a novel robust estimation method called inlier set sample optimization is proposed to solve these problems. First, a one-class support vector machine-based preselection algorithm is performed to efficiently select a candidate inlier set from putative SIFT correspondences according to distribution consistency of features in location, scale and orientation. Then, the quasi-optimal inlier set is refined iteratively by maximizing a soft decision objective function. Finally, fundamental matrix is estimated with the optimal inlier set. Experimental results show that the proposed method is superior to several state-of-the-art robust methods in speed, accuracy and stability and is applicable to wide baseline images with large differences.

9.
An evaluation algorithm for univariate polynomials is presented which yields the function values for a sequence of equidistant points. The method is based on a formula which relates the forward differences with step size λh (λ a positive integer) to the forward differences with step size h. The new method needs about half as many essential operations as Horner's scheme applied to each point separately. It is also compared with a third method from the literature which is faster yet less accurate.
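
The saving over per-point Horner evaluation comes from the forward-difference recurrence: for a polynomial of degree d, the d-th forward difference on an equidistant grid is constant, so each new value costs only d additions. The sketch below shows that classic recurrence; the paper's specific λh-to-h relation is not reproduced.

```python
# Evaluate a polynomial on an equidistant grid via forward differences.
# After building the difference table, each further value costs only
# deg additions instead of a full Horner evaluation.

def horner(coeffs, x):
    """coeffs: highest degree first."""
    acc = 0.0
    for c in coeffs:
        acc = acc * x + c
    return acc

def eval_equidistant(coeffs, x0, h, m):
    """Values of the polynomial at x0, x0 + h, ..., x0 + (m-1)*h."""
    deg = len(coeffs) - 1
    # Seed the difference table with deg+1 direct evaluations.
    d = [horner(coeffs, x0 + i * h) for i in range(deg + 1)]
    for j in range(1, deg + 1):
        for i in range(deg, j - 1, -1):
            d[i] -= d[i - 1]          # d[j] = j-th forward difference at x0
    out = []
    for _ in range(m):
        out.append(d[0])
        for j in range(deg):
            d[j] += d[j + 1]          # shift all differences one grid step
    return out                         # d[deg] stays constant for a polynomial

coeffs = [1.0, 0.0, -2.0, 1.0]         # x^3 - 2x + 1
vals = eval_equidistant(coeffs, 0.0, 0.5, 8)
print(vals)
```

For exact arithmetic the recurrence reproduces the polynomial exactly; in floating point, rounding errors accumulate along the grid, which is the accuracy trade-off the abstract alludes to.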

10.
Recommender systems suggest items to users based on their interests and have been used widely in various domains, including online stores, web advertisements, and social networks. As part of their process, recommender systems use similarity measurements to find interesting items. Although many similarity measurements have been proposed in the literature, they have not concentrated on actual user interests. This paper proposes a new efficient hybrid similarity measure for recommender systems based on user interests. This similarity measure is a combination of two novel base similarity measurements: the user interest–user interest similarity measure and the user interest–item similarity measure. The hybrid measure improves on existing work in three aspects. First, it improves current recommender systems by using actual user interests. Second, it provides a comprehensive evaluation of an efficient solution to the cold-start problem. Third, it works well even when no co-rated items exist between two users. Our experiments show that the proposed similarity measure is efficient in terms of accuracy, execution time, and applicability: it achieves a mean absolute error (MAE) as low as 0.42, with 64% applicability and an execution time as low as 0.03 s, whereas the best existing similarity measure from the literature achieves an MAE of 0.88.

11.
Rock art is an archaeological term for human-made markings on stone, including carved markings, known as petroglyphs, and painted markings, known as pictographs. It is believed that there are millions of petroglyphs in North America alone, and the study of this valued cultural resource has implications even beyond anthropology and history. Surprisingly, although image processing, information retrieval and data mining have had a large impact on many human endeavors, they have had essentially zero impact on the study of rock art. In this work we identify the reasons for this, and introduce a novel distance measure and algorithms which allow efficient and effective data mining of large collections of rock art.

12.
This paper addresses the problem of transductive learning of the kernel matrix from a probabilistic perspective. We place a Wishart process prior on the kernel matrix and construct a hierarchical generative model for kernel matrix learning. Specifically, we consider the target kernel matrix as a random matrix following the Wishart distribution with a positive definite parameter matrix and a degree of freedom. This parameter matrix, in turn, has an inverted Wishart distribution (with a positive definite hyperparameter matrix) as its conjugate prior, and the degree of freedom is equal to the dimensionality of the feature space induced by the target kernel. Formulating the task as a missing-data problem, we devise an expectation-maximization (EM) algorithm to infer the missing data, the parameter matrix and the feature dimensionality in a maximum a posteriori (MAP) manner. With different settings for the target kernel and hyperparameter matrices, the model can be applied to different types of learning problems. In particular, we consider its application in a semi-supervised learning setting and present two classification methods. Classification experiments are reported on some benchmark data sets with encouraging results. In addition, we also devise the EM algorithm for kernel matrix completion.

13.
14.
Jacobi-based algorithms have attracted attention because they have a high degree of potential parallelism and may be more accurate than QR-based algorithms. In this paper we discuss how to design efficient Jacobi-like algorithms for the eigenvalue decomposition of a real normal matrix. We introduce a block Jacobi-like method that uses only real arithmetic and orthogonal similarity transformations and ultimately achieves quadratic convergence. A theoretical analysis is conducted and some experimental results are presented.
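
The Jacobi family is easiest to see in the classic symmetric case: repeatedly annihilate the largest off-diagonal entry with a plane rotation. The sketch below shows that baseline; the paper's block method for general real normal matrices is considerably more involved and is not reproduced here.

```python
# Classic greedy Jacobi eigenvalue iteration for a real symmetric matrix.
# Each sweep applies an orthogonal similarity A <- J^T A J that zeroes the
# currently largest off-diagonal element.
import math

def jacobi_eigenvalues(A, tol=1e-12, max_rot=100):
    n = len(A)
    A = [row[:] for row in A]
    for _ in range(max_rot):
        # Pick the largest off-diagonal element to annihilate.
        p, q = max(((i, j) for i in range(n) for j in range(i + 1, n)),
                   key=lambda ij: abs(A[ij[0]][ij[1]]))
        if abs(A[p][q]) < tol:
            break
        # Rotation angle that zeroes A[p][q]: tan(2*phi) = 2*Apq / (Aqq - App).
        phi = 0.5 * math.atan2(2.0 * A[p][q], A[q][q] - A[p][p])
        c, s = math.cos(phi), math.sin(phi)
        for k in range(n):                       # A <- A J (columns p, q)
            akp, akq = A[k][p], A[k][q]
            A[k][p] = c * akp - s * akq
            A[k][q] = s * akp + c * akq
        for k in range(n):                       # A <- J^T A (rows p, q)
            apk, aqk = A[p][k], A[q][k]
            A[p][k] = c * apk - s * aqk
            A[q][k] = s * apk + c * aqk
    return sorted(A[i][i] for i in range(n))

print(jacobi_eigenvalues([[2.0, 1.0], [1.0, 2.0]]))  # approx [1.0, 3.0]
```

Each rotation is embarrassingly independent of non-overlapping (p, q) pairs, which is the source of the parallelism the abstract mentions.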

15.
Classic kernel principal component analysis (KPCA) is computationally inefficient when extracting features from large data sets. In this paper, we propose efficient KPCA (EKPCA), an algorithm that improves the computational efficiency of KPCA by using a linear combination of a small portion of the training samples, referred to as basic patterns, to approximate the KPCA feature extractor, that is, the eigenvectors of the covariance matrix used in feature extraction. We show that the feature correlation (i.e., the correlation between different feature components) can be evaluated by the cosine distance between the kernel vectors, which are the column vectors of the kernel matrix. The proposed algorithm is easy to implement: it first uses feature-correlation evaluation to determine the basic patterns and then uses them to reconstruct the KPCA model, perform feature extraction, and classify the test samples. Since there are usually far fewer basic patterns than training samples, EKPCA feature extraction is much more computationally efficient than that of KPCA. Experimental results on several benchmark data sets show that EKPCA is much faster than KPCA while achieving similar classification performance.

16.
Kernel partial least squares (KPLS) is a multivariate statistical method widely used in process monitoring. However, KPLS relies on an oblique decomposition, leaving redundant information in the quality-related space that can easily trigger false alarms. This paper therefore proposes an efficient kernel partial least squares (EKPLS) model. The proposed method uses singular value decomposition (SVD) to orthogonally decompose the kernel matrix into a quality-related space and a quality-unrelated space, effectively reducing the redundant information in the quality-related space, and then applies principal component analysis (PCA) to split the quality-related space, by variance, into a principal quality subspace and a minor quality subspace. In addition, to further reduce false alarms caused by quality-unrelated faults, an orthogonal signal correction (OSC) preprocessing method based on quality estimation is proposed and combined with the EKPLS model to form the OSC-EKPLS algorithm. OSC-EKPLS applies OSC preprocessing to the measured data using quality estimates, reducing both the computational complexity and the false alarm rate. Finally, numerical simulations and the Tennessee-Eastman process verify that OSC-EKPLS offers good fault detectability and a lower false alarm rate.

17.
We propose an algorithm for the numerical evaluation of convolution integrals of the form ∫0^x k(x−y) f(y,x) dy, for x ∈ [0, X]. Our method is especially suitable in situations where the fundamental interval [0, X] is very long and the kernel function k is expensive to calculate. Separate versions are provided for the case where the forcing function f is known in advance and for the case where it must be determined step by step along the solution path. These methods are efficient with respect to both run time and memory requirements.
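
For reference, the quantity being computed can be fixed with a direct trapezoidal rule. This O(n²)-per-interval sketch makes no attempt at the paper's economies (reuse across the long interval, sparing evaluations of an expensive k); it only pins down the integral.

```python
# Straightforward trapezoidal evaluation of V(x) = integral_0^x k(x-y) f(y,x) dy
# on a uniform grid in y.

def convolution_integral(k, f, x, n=200):
    """Trapezoid rule for the convolution integral with n subintervals."""
    if x == 0.0:
        return 0.0
    h = x / n
    # Endpoints y = 0 and y = x get half weight.
    total = 0.5 * (k(x) * f(0.0, x) + k(0.0) * f(x, x))
    for i in range(1, n):
        y = i * h
        total += k(x - y) * f(y, x)
    return total * h

# Check against a closed form: k(t) = t, f = 1 gives integral_0^x (x-y) dy = x^2/2.
approx = convolution_integral(lambda t: t, lambda y, x: 1.0, 2.0)
print(approx)  # ~ 2.0 (= x^2/2 for x = 2)
```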

18.
There has been great progress from the traditional allocation algorithms designed for small memories to more modern algorithms exemplified by McKusick's and Karels' allocator (McKusick MK, Karels MJ. Design of a general purpose memory allocator for the 4.3BSD UNIX kernel. In USENIX Conference Proceedings, Berkeley, CA, June 1988). Nonetheless, none of these algorithms was designed to meet the needs of UNIX kernels supporting commercial data-processing applications in a shared-memory multiprocessor environment. On a shared-memory multiprocessor, memory is a global resource, so allocator performance depends on synchronization primitives and manipulation of shared data as well as on raw CPU speed. Synchronization primitives and access to shared data in turn depend on system-bus interactions. The speed of system buses has not kept pace with that of CPUs, as witnessed by the ever-larger caches found on recent systems; thus, the performance of synchronization primitives, and of the memory allocators that use them, has not received the full benefit of increased CPU performance. An earlier paper (McKenney PE, Slingwine J. Efficient kernel memory allocation on shared-memory multiprocessors. In USENIX Conference Proceedings, Berkeley, CA, February 1993) describes an allocator designed for this situation. This article reviews the motivation for and design of the allocator and presents the experience gained during the seven years the allocator has been in production use. Copyright © 2001 John Wiley & Sons, Ltd.

19.
The support vector clustering (SVC) algorithm consists of two main phases: SVC training and cluster assignment. The former requires calculating the Lagrange multipliers and the latter requires calculating the adjacency matrix, which may impose a high computational burden on cluster analysis. To overcome these difficulties, we present an improved SVC algorithm. In the training phase, an entropy-based algorithm for computing the Lagrange multipliers is derived by means of Lagrangian duality and Jaynes' maximum entropy principle, which markedly reduces the time spent calculating the multipliers. In the cluster assignment phase, the kernel matrix is used to preliminarily classify the data points before the adjacency matrix is calculated, which effectively reduces the scale of that computation. As a result, substantial computational savings are achieved by exploiting the special structure of the SVC problem. The validity and performance of the proposed algorithm are demonstrated by numerical experiments.

20.
Face recognition and verification systems are vulnerable to video spoofing attacks. In this paper, we present a diffusion-based kernel matrix model for face liveness detection. We use anisotropic diffusion to enhance the edges of each frame in a video, and a kernel matrix model to extract the video features, which we call diffusion kernel (DK) features. The DK features reflect the inner correlation of the face images in the video. We employ a generalized multiple kernel learning method to fuse the DK features with deep features extracted from convolutional neural networks to achieve better performance. Experimental evaluation on two publicly available datasets shows that the proposed method outperforms state-of-the-art face liveness detection methods.
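
The edge-enhancing smoothing step can be illustrated with one classic Perona-Malik anisotropic diffusion iteration: large intensity jumps (edges) conduct little and are preserved, while small fluctuations are smoothed away. The parameters below are illustrative, and this is the textbook scheme, not the paper's full DK-feature pipeline.

```python
# One Perona-Malik anisotropic diffusion step on a 2-D gray image.
# kappa controls which gradients count as "edges"; lam is the step size.
import math

def diffuse_step(img, kappa=30.0, lam=0.2):
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    g = lambda d: math.exp(-(d / kappa) ** 2)   # conduction coefficient
    for y in range(h):
        for x in range(w):
            acc = 0.0
            for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w:
                    d = img[ny][nx] - img[y][x]
                    acc += g(d) * d             # large jumps conduct little
            out[y][x] = img[y][x] + lam * acc
    return out

flat = [[100.0] * 5 for _ in range(5)]
noisy = [row[:] for row in flat]
noisy[2][2] = 140.0                              # single noisy pixel
print(diffuse_step(flat) == flat)                # True: nothing to smooth
print(diffuse_step(noisy)[2][2] < 140.0)         # noise pulled toward neighbors
```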
