Similar Articles
20 similar articles found (search time: 15 ms)
1.
The kernel minimum squared error (KMSE) model expresses the feature extractor as a linear combination of all the training samples in the high-dimensional kernel space. To extract a feature from a sample, KMSE must evaluate as many kernel functions as there are training samples, so the computational cost of KMSE-based feature extraction grows with the size of the training set. In this paper, we propose an efficient kernel minimum squared error (EKMSE) model for two-class classification. EKMSE expresses each feature extractor as a linear combination of nodes, which are a small portion of the training samples, so extracting a feature requires only as many kernel evaluations as there are nodes. Since there are typically far fewer nodes than training samples, EKMSE is much faster than KMSE in feature extraction, while achieving the same training accuracy as standard KMSE and avoiding overfitting. We implement the EKMSE model using two algorithms. Experimental results show the feasibility of the EKMSE model.
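The cost asymmetry the abstract describes is easy to see in code. Below is a minimal sketch of MSE-style kernel feature extraction (an illustration of the standard KMSE idea, not the paper's EKMSE; the RBF kernel and the regularizer `lam` are assumptions of this sketch):

```python
import numpy as np

def rbf_kernel(x, y, gamma=1.0):
    # Gaussian (RBF) kernel between two vectors
    return np.exp(-gamma * np.sum((x - y) ** 2))

def kmse_train(X, y, lam=1e-3, gamma=1.0):
    # Solve (K + lam*I) alpha = y for the combination coefficients,
    # one coefficient per training sample.
    n = X.shape[0]
    K = np.array([[rbf_kernel(a, b, gamma) for b in X] for a in X])
    return np.linalg.solve(K + lam * np.eye(n), y)

def kmse_feature(x, X, alpha, gamma=1.0):
    # One kernel evaluation per training sample -> cost is O(n) per sample,
    # which is exactly what EKMSE reduces by using far fewer nodes.
    return sum(a * rbf_kernel(x, xi, gamma) for a, xi in zip(alpha, X))
```

The extracted feature approximates the class label for points near the training data, which is why the same construction also serves as a two-class discriminant.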

2.
Objective: Speckle severely degrades the image quality of polarimetric synthetic aperture radar (PolSAR), and speckle suppression is an indispensable preprocessing step for using SAR data. This paper proposes a PolSAR filtering method based on a non-locally weighted linear minimum mean square error (LMMSE) filter. Method: The core of the method is to use non-local means theory to obtain the weights of the pixel samples used in the LMMSE estimator. During sample-pixel selection, the polarimetric scattering characteristics of the pixel under processing and the heterogeneity of neighboring patches are used to exclude dissimilar pixels, which accelerates the algorithm while preserving point targets and adaptively adjusting the patch window size. Results: Experiments on simulated and real images show that image quality is clearly improved after filtering. Compared with the traditional LMMSE algorithm, for both single-look and multi-look images the equivalent number of looks of the denoised results is higher by more than 8 looks, and the peak signal-to-noise ratio is improved by 5.8 dB. The overall classification accuracy of the denoised images exceeds 83%, and the method also runs considerably faster than the non-local means algorithm. Conclusion: The method not only suppresses speckle noise effectively but also preserves edges, details, and polarimetric scattering characteristics well, supporting the efficient use of SAR data in subsequent processing.
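The weighting step can be roughly illustrated with generic non-local means weights computed from patch similarity (a sketch only; it omits the polarimetric scattering test, the heterogeneity-based pixel exclusion, and the adaptive windows the paper adds, and the bandwidth `h` is an assumption):

```python
import numpy as np

def nonlocal_weights(patches, ref_idx, h):
    # Weight each candidate pixel by the similarity of its neighborhood
    # patch to the reference patch; dissimilar pixels get near-zero weight,
    # so they contribute little to the LMMSE sample statistics.
    d2 = np.sum((patches - patches[ref_idx]) ** 2, axis=1)
    w = np.exp(-d2 / h ** 2)
    return w / w.sum()
```

The normalized weights can then be plugged into weighted mean/variance estimates inside an LMMSE filter.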

3.
In this paper, we propose new adaptive algorithms for the extraction and tracking of the least (minor) or, eventually, principal eigenvectors of a positive Hermitian covariance matrix. The main advantage of our proposed algorithms is their low computational complexity and numerical stability, even in the minor component analysis case. The proposed algorithms are fast in the sense that their computational cost is O(np) flops per iteration, where n is the size of the observation vector and p<n is the number of eigenvectors to estimate. We consider Oja-type minor component algorithms based on constrained and unconstrained stochastic gradient techniques. Using appropriate fast orthogonalization procedures, we introduce new fast algorithms that extract the minor (or principal) eigenvectors and guarantee good numerical stability as well as the orthogonality of their weight matrix at each iteration. To obtain a faster convergence rate, we propose a normalized version of these algorithms that seeks the optimal step size. Our algorithms behave similarly to, or even better than, existing algorithms of higher complexity, as illustrated by our simulation results.
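A minimal sketch of the Hebbian/Oja-type iteration with per-step re-orthonormalization, for the principal-subspace case (the paper's fast O(np) orthogonalization is replaced here by a plain QR step, and the step size `mu` is fixed rather than optimized, so this is an illustration, not the proposed algorithm):

```python
import numpy as np

def oja_track(samples, p, mu=0.05, seed=0):
    # Track the p principal eigenvectors of the sample covariance with
    # Oja's stochastic gradient rule plus re-orthonormalization.
    rng = np.random.default_rng(seed)
    n = samples.shape[1]
    W = np.linalg.qr(rng.standard_normal((n, p)))[0]
    for x in samples:
        y = W.T @ x
        W = W + mu * np.outer(x, y)   # Hebbian update toward principal subspace
        W = np.linalg.qr(W)[0]        # keep columns orthonormal each iteration
    return W
```

For minor components the sign of the update is flipped, which is precisely the regime where naive iterations lose numerical stability and the paper's orthogonalization matters.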

4.
This paper considers parameter estimation for nonlinear models using the median squared error (MSE) criterion, which in the past has been limited to linear models. It is shown that, under this criterion, estimating parameters for hinging hyperplanes (HH) and for linear models is essentially the same problem. Motivated by this fact, MSE estimation is developed for HH: a local optimality condition is given, and based on this condition an algorithm using linear programming is proposed. Numerical experiments show the good performance of the proposed estimation strategy and algorithm.

5.
To address the noise amplification of the traditional correlation rotation (CR) algorithm, this paper minimizes the error between the received and transmitted signals via a Lagrangian formulation, computes imperfect channel state information from Bayesian theory and channel statistics, and designs CR precoding schemes based on the minimum mean square error (MMSE) criterion for both perfect and imperfect channel state information (CSI). Analysis and simulation results show that, compared with the traditional CR algorithm under the zero-forcing (ZF) criterion, the proposed scheme improves bit error rate performance by 2-3 dB at the same signal-to-noise ratio (SNR) with perfect CSI, and the bit error performance also improves significantly with imperfect CSI.

6.
Watermarking is a popular method for effective copyright protection of digital data. Invisible image watermarking is a type of watermarking used to conceal secret information in a cover image. In any watermarking system there is a trade-off between imperceptibility and the payload hidden in the cover medium, so only a limited amount of data can be concealed while the watermark remains imperceptible. This paper proposes a new method of adaptive blind digital image data hiding in the Discrete Cosine Transform (DCT) domain using Minimum Error Least Significant Bit Replacement (MELSBR) and a Genetic Algorithm (GA). In the proposed method, the secret image is embedded into the DCT-transformed cover image. Initially, the host and secret images are both partitioned into equal-sized blocks. For each secret image block, a host image block is selected through the GA and embedded using the MELSBR method. The GA identifies target cover image blocks such that, with LSB embedding, both the visual quality of the cover image and the imperceptibility of the secret image remain least affected. In the extraction process, the watermarked image is decomposed into equal-sized blocks and the secret image is reconstructed using a Jigsaw Puzzle Solver (JPS). Despite doubling the payload, the experimental results show better imperceptibility and robustness than current approaches in this domain.
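The abstract does not spell out MELSBR; a common minimum-error LSB replacement trick, shown here as a hedged sketch on a single integer coefficient (range clipping is left out, and the exact variant used in the paper may differ):

```python
def melsb_embed(value, bits, k):
    # Replace the k least significant bits of `value` with payload `bits`,
    # then test shifting the result by +/- 2^k: all three candidates carry
    # the same k LSBs, so we keep the one closest to the original value,
    # minimizing the embedding error.
    base = ((value >> k) << k) | bits
    candidates = (base, base + (1 << k), base - (1 << k))
    return min(candidates, key=lambda c: abs(c - value))
```

For example, naively replacing the 3 LSBs of 127 with 000 gives 120 (error 7), while the minimum-error choice is 128 (error 1) with the same payload bits.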

7.
This paper describes an optimal ripple-free deadbeat control strategy for single-input–single-output (SISO) linear sampled data plants. The cost function to be minimized is a linear combination of a time-weighted cumulative term that penalizes the tracking error, that is, an integral of time squared error (ITSE) cost term, and a cumulative term which penalizes the control signal deviations from its steady-state value. The optimization problem turns out to be convex, and closed-form solutions are obtained. An example is included to illustrate our results.

8.
This paper presents an efficient construction algorithm for obtaining sparse kernel density estimates based on a regression approach that directly optimizes model generalization capability. Computational efficiency of the density construction is ensured using an orthogonal forward regression, and the algorithm incrementally minimizes the leave-one-out test score. A local regularization method is incorporated naturally into the density construction process to further enforce sparsity. An additional advantage of the proposed algorithm is that it is fully automatic: the user is not required to specify any criterion to terminate the density construction procedure. This is in contrast to an existing state-of-the-art kernel density estimation method using the support vector machine (SVM), where the user is required to specify some critical algorithm parameter. Several examples are included to demonstrate the ability of the proposed algorithm to effectively construct a very sparse kernel density estimate with accuracy comparable to that of the full-sample optimized Parzen window density estimate. Our experimental results also demonstrate that the proposed algorithm compares favorably with the SVM method, in terms of both test accuracy and sparsity, for constructing kernel density estimates.
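For reference, the full-sample Parzen window estimate that the sparse construction is benchmarked against can be sketched as follows (Gaussian kernel; the bandwidth `h` is an assumption, and the paper's orthogonal forward regression and leave-one-out machinery are not shown):

```python
import numpy as np

def parzen_density(x, samples, h):
    # Full-sample Parzen window (Gaussian kernel) density estimate in 1-D:
    # one kernel evaluation per training sample at every query point,
    # which is exactly the cost a sparse estimate tries to avoid.
    u = (x - samples[:, None]) / h
    return np.mean(np.exp(-0.5 * u ** 2), axis=0) / (h * np.sqrt(2 * np.pi))
```

A sparse estimate replaces the mean over all samples with a weighted sum over a small selected subset of kernels.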

9.
The note proposes an efficient nonlinear identification algorithm by combining a locally regularized orthogonal least squares (LROLS) model selection with a D-optimality experimental design. The proposed algorithm aims to achieve maximized model robustness and sparsity via two effective and complementary approaches. The LROLS method alone is capable of producing a very parsimonious model with excellent generalization performance. The D-optimality design criterion further enhances the model efficiency and robustness. An added advantage is that the user only needs to specify a weighting for the D-optimality cost in the combined model selecting criterion and the entire model construction procedure becomes automatic. The value of this weighting does not influence the model selection procedure critically and it can be chosen with ease from a wide range of values.  相似文献   

10.
To process gene chip (microarray) images well and extract the data describing gene spots as accurately as possible, a minimum-error thresholding segmentation algorithm is adopted. Assuming that the target and background follow a mixture of normal distributions, the method defines a minimum classification error objective function and finds the optimal threshold that minimizes it, thereby segmenting gene spots from the background. Feature data are then extracted from the segmented gene-spot images, clustered, and used to classify the experimental spots. In experiments, two sets of microarray images were analyzed with this method; the gene spots were classified well, verifying the feasibility of this microarray analysis approach.
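Minimum-error thresholding under a two-normal-mixture assumption is the classic Kittler–Illingworth criterion; a compact sketch over a grey-level histogram is shown below (the microarray-specific feature extraction and clustering steps are not shown, and the paper's exact objective may differ in detail):

```python
import numpy as np

def min_error_threshold(hist):
    # Kittler-Illingworth minimum-error thresholding: for each candidate
    # threshold t, fit a normal to each side of the histogram and score
    # the classification-error criterion J(t); return the minimizing t.
    p = np.asarray(hist, float)
    p = p / p.sum()
    levels = np.arange(len(p))
    best_t, best_j = None, np.inf
    for t in range(1, len(p) - 1):
        p1, p2 = p[:t].sum(), p[t:].sum()
        if p1 <= 0 or p2 <= 0:
            continue
        m1 = (levels[:t] * p[:t]).sum() / p1
        m2 = (levels[t:] * p[t:]).sum() / p2
        v1 = ((levels[:t] - m1) ** 2 * p[:t]).sum() / p1
        v2 = ((levels[t:] - m2) ** 2 * p[t:]).sum() / p2
        if v1 <= 0 or v2 <= 0:
            continue
        j = 1 + 2 * (p1 * np.log(np.sqrt(v1)) + p2 * np.log(np.sqrt(v2))) \
              - 2 * (p1 * np.log(p1) + p2 * np.log(p2))
        if j < best_j:
            best_t, best_j = t, j
    return best_t
```

On a bimodal histogram the criterion places the threshold near the Bayes boundary between the two fitted normals.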

11.
Traditional non-negative matrix factorization (NMF) applied to clustering does not account for robustness and sparsity simultaneously, which limits clustering performance. To address this, a sparse NMF algorithm based on the kernel trick and hypergraph regularization (KHGNMF) is proposed. First, building on the good properties of the kernel trick, the Frobenius norm in standard NMF is replaced with the L2,1 norm, and a hypergraph regularization term is added to preserve as much of the intrinsic geometric structure of the original data as possible. Second, the L2,1/2 pseudo-norm and an L1/2 regularization term are introduced into the NMF model as sparsity constraints. Finally, the new algorithm is applied to image clustering. Experiments on six standard datasets show that, relative to nonlinear orthogonal graph-regularized NMF, KHGNMF improves clustering performance (accuracy and normalized mutual information) by 39%-54% and effectively improves the sparsity and robustness of the algorithm, yielding better clustering results.
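As background for the variants discussed here, baseline NMF with the standard multiplicative updates can be sketched as follows (the KHGNMF additions — the L2,1 loss, kernel trick, hypergraph regularizer, and L1/2 sparsity terms — are not implemented in this sketch):

```python
import numpy as np

def nmf(V, r, iters=200, seed=0, eps=1e-9):
    # Plain multiplicative-update NMF minimizing the Frobenius objective
    # ||V - W H||_F^2 with V, W, H all non-negative. The updates preserve
    # non-negativity because they only multiply by non-negative ratios.
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, r)) + eps
    H = rng.random((r, m)) + eps
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H
```

Robust variants like the paper's replace the Frobenius loss with an L2,1 loss so that outlier columns contribute linearly rather than quadratically.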

12.
In Minimum Error Rate Training (MERT), Bleu is often used as the error function, despite having been shown to correlate less well with human judgment than other metrics such as Meteor and Ter. In this paper, we present empirical results showing that parameters tuned on Bleu may lead to sub-optimal Bleu scores under certain data conditions. Such scores can be improved significantly by tuning on an entirely different metric, e.g. Meteor: by 0.0082 Bleu, or 3.38% relative improvement, on the WMT08 English–French data. We analyze the influence of the number of references and the choice of metric on the result of MERT, and experiment on different data sets. We show the problems of tuning on a metric that is not designed for the single-reference scenario and point out some possible solutions.

13.
This paper focuses on the problem of how data representation influences the generalization error of kernel based learning machines like support vector machines (SVM) for classification. Frame theory provides a well founded mathematical framework for representing data in many different ways. We analyze the effects of sparse and dense data representations on the generalization error of such learning machines measured by using leave-one-out error given a finite amount of training data. We show that, in the case of sparse data representations, the generalization error of an SVM trained by using polynomial or Gaussian kernel functions is equal to the one of a linear SVM. This is equivalent to saying that the capacity of separating points of functions belonging to hypothesis spaces induced by polynomial or Gaussian kernel functions reduces to the capacity of a separating hyperplane in the input space. Moreover, we show that, in general, sparse data representations increase or leave unchanged the generalization error of kernel based methods. Dense data representations, on the contrary, reduce the generalization error in the case of very large frames. We use two different schemes for representing data in overcomplete systems of Haar and Gabor functions, and measure SVM generalization error on benchmarked data sets.

14.
The mathematical complexity of minimum mean square estimators has made the consideration of suboptimal solutions, such as linear minimum mean square (m.m.s.) estimators, inevitable. The compromise between performance and complexity is, in general, less serious if the estimator that substitutes for the optimum one is polynomial; if the minimum mean square estimator happens to equal a polynomial one, the substitution involves no compromise in performance. Balakrishnan found a necessary and sufficient condition, satisfied by the joint characteristic functions of the observations and the variable to be estimated, for the m.m.s. estimate to be a polynomial. The equivalent moment relationships for this case are found in the present paper. A matrix expression for the error difference between two different m.m.s. polynomial estimators is also derived; this form requires far fewer calculations than finding the two errors separately.

15.
A sparse blind source separation algorithm based on Hough-transform line detection is proposed. It works directly on the signal data, extracting the directions of the lines along which the data are distributed in space; this removes the randomness of initialization and lowers the requirement on signal sparsity. The mixing matrix is estimated accurately, and the source signals are then separated. Simulation results verify the effectiveness of the algorithm.
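For two mixtures, the line-direction extraction can be sketched as a one-dimensional Hough-style angle vote: with sparse sources, most observed points lie along the mixing-matrix column directions, so an amplitude-weighted histogram of point angles peaks at those directions (the bin count, minimum peak separation, and two-channel restriction are assumptions of this sketch):

```python
import numpy as np

def estimate_mixing_directions(X, n_sources, n_bins=180, min_sep=0.2):
    # X: 2 x T matrix of mixture observations. Fold each point's angle
    # into [0, pi), vote it into a histogram weighted by its amplitude
    # (a 1-D Hough transform), and pick the strongest separated peaks.
    theta = np.mod(np.arctan2(X[1], X[0]), np.pi)
    votes, edges = np.histogram(theta, bins=n_bins, range=(0.0, np.pi),
                                weights=np.hypot(X[0], X[1]))
    picked = []
    for b in np.argsort(votes)[::-1]:            # strongest bins first
        c = 0.5 * (edges[b] + edges[b + 1])
        if all(min(abs(c - a), np.pi - abs(c - a)) > min_sep for a in picked):
            picked.append(c)
        if len(picked) == n_sources:
            break
    return np.array([np.cos(picked), np.sin(picked)])  # unit columns
```

Once the column directions are recovered, the sources can be separated, e.g. by inverting the estimated mixing matrix in the determined case.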

16.
A dimensionality reduction method based on the minimum classification error rate and Parzen windows is proposed. Parzen windows are used to estimate the probability density of the data; the classification error rate under each feature dimension is then computed to judge that dimension's contribution to classifying the target; and feature dimensions are selected according to their contribution, achieving dimensionality reduction.

17.
Doi E, Lewicki MS. Neural Computation, 2011, 23(10): 2498-2510
Robust coding has been proposed as a solution to the problem of minimizing decoding error in the presence of neural noise. Many real-world problems, however, have degradation in the input signal, not just in neural representations. This generalized problem is more relevant to biological sensory coding, where internal noise arises from limited neural precision and external noise from distortion of the sensory signal, such as blurring and phototransduction noise. In this note, we show that the optimal linear encoder for this problem can be decomposed exactly into two serial processes that can be optimized separately. One is Wiener filtering, which optimally compensates for input degradation. The other is robust coding, which best uses the available representational capacity for signal transmission with a noisy population of linear neurons. We also present a spectral analysis of the decomposition that characterizes how the reconstruction error is minimized under different input signal spectra, types and amounts of degradation, degrees of neural precision, and neural population sizes.
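The Wiener-filtering half of the decomposition is easy to illustrate in the scalar case (a textbook sketch with unit-variance Gaussian signal and input noise; the robust-coding stage, neural noise, and population sizes are not modeled here):

```python
import numpy as np

def wiener_gain(s2, n2):
    # For y = x + n with x ~ N(0, s2) and input noise n ~ N(0, n2), the
    # linear MMSE estimate of x from y is g*y with g = s2 / (s2 + n2):
    # components where noise dominates are attenuated toward zero.
    return s2 / (s2 + n2)

# Monte Carlo check that the gain beats using the raw degraded input:
rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, 100_000)
y = x + rng.normal(0.0, 1.0, 100_000)
g = wiener_gain(1.0, 1.0)
mse_wiener = np.mean((g * y - x) ** 2)
mse_raw = np.mean((y - x) ** 2)
```

At equal signal and noise power the optimal gain is 1/2 and the resulting MSE is half that of passing the degraded input through unchanged.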

18.
Radar target recognition based on the kernel optimal transformation and cluster centers algorithm
Extracting effective discriminative features is the key to recognition of radar one-dimensional high-resolution range profiles. Based on the kernelization principle of statistical learning theory, a new discriminative feature extraction method, the kernel optimal transformation and cluster centers algorithm, is proposed. Through a nonlinear mapping, the data are projected into kernel space, where the optimal transformation and cluster centers algorithm is executed, extracting robust nonlinear discriminative features of range profiles. In addition, a fast computation scheme based on a basis of the subspace spanned by the training samples in kernel space is given, which speeds up feature extraction. Experiments on data measured in a microwave anechoic chamber demonstrate the effectiveness of the method.

19.
A novel sparse kernel density estimation method is proposed based on sparse Bayesian learning with random iterative dictionary preprocessing. Using the empirical cumulative distribution function as the response vector, the sparse weights of the density estimate are obtained by sparse Bayesian learning. The proposed iterative dictionary learning algorithm reduces the number of kernel computations, an essential step of sparse Bayesian learning. Building on the sparse kernel density estimate, a normalized mutual information feature selection method based on quadratic Renyi entropy is proposed. Simulations on three examples demonstrate that the proposed method is comparable to typical Parzen kernel density estimates and, compared with other state-of-the-art sparse kernel density estimators, performs very well in terms of the number of kernels required. In the last example, the Friedman and Housing data are used to illustrate the proposed feature selection method.

20.
Taking Thematic Mapper (TM) remote sensing images of Shijiazhuang, Hebei Province from 2003 and 2004 as an example, a change detection method based on ground-object feature enhancement is proposed, targeting the spectral characteristics of each band. Samples of each land-cover class are taken from the two images, and statistics such as the per-band mean and standard deviation of the samples determine the weighting coefficients of a band-combination operation; the feature-enhanced image is then computed, enhancing the specified land-cover types in both images. The difference image of the two feature-enhanced images is computed, and minimum-error thresholding yields the change detection result. Comparative experiments show that the method extracts changed regions with an overall accuracy of 90%, achieving higher detection accuracy and better feasibility and adaptability than traditional change detection based on principal component analysis (PCA).


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号