20 similar documents were retrieved (search time: 31 ms)
1.
Formulated as a linear inverse problem, spectral estimation is particularly underdetermined when only short data sets are available. Regularization by penalization is an appealing nonparametric approach to solving such ill-posed problems. Following Sacchi et al. (see ibid., vol.46, no.1, p.32-38, 1998), we first address line-spectrum recovery in this framework. Then, we extend the methodology to situations of increasing difficulty: the case of smooth spectra and the case of mixed spectra, i.e., peaks embedded in smooth spectral contributions. The latter case is of high practical importance since it encompasses many problems of target detection and localization in remote sensing. Emphasis is placed on adequate choices of penalty functions: following Sacchi et al., separable functions are retained to retrieve peaks, whereas Gibbs-Markov potential functions are introduced to encode spectral smoothness. Finally, mixed spectra are obtained from the conjunction of both contributions, each one bringing its own penalty function. Spectral estimates are defined as minimizers of strictly convex criteria. In the cases of smooth and mixed spectra, the criteria are nondifferentiable, and we adopt a graduated nondifferentiability approach to compute an estimate. The performance of the proposed techniques is tested on the well-known Kay and Marple (1982) example.
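As a rough illustration of the penalized approach described above (not the authors' code), the sketch below recovers line spectra from a short record by fitting an oversampled Fourier dictionary under a separable, strictly convex penalty. The hyperbolic penalty, grid size, and the values of `lam` and `delta` are illustrative assumptions.

```python
# Minimal sketch of penalized line-spectrum estimation: y = A x + noise, where A is
# an oversampled Fourier dictionary and x holds complex amplitudes on a fine grid.
# A separable penalty sqrt(|x_k|^2 + delta^2) (a smooth stand-in for |x_k|) favors
# a few sharp peaks; the problem is solved by reweighted least squares.
import numpy as np

def line_spectrum_estimate(y, n_freqs=256, lam=0.5, delta=1e-3, n_iter=50):
    n = len(y)
    freqs = np.arange(n_freqs) / n_freqs                      # normalized frequency grid
    A = np.exp(2j * np.pi * np.outer(np.arange(n), freqs))    # Fourier dictionary (n x n_freqs)
    x = np.zeros(n_freqs, dtype=complex)
    for _ in range(n_iter):                                   # IRLS / half-quadratic iterations
        w = 1.0 / np.sqrt(np.abs(x) ** 2 + delta ** 2)        # weights from the hyperbolic penalty
        x = np.linalg.solve(A.conj().T @ A + lam * np.diag(w), A.conj().T @ y)
    return freqs, np.abs(x)

# Example: two sinusoids observed over only 32 samples
rng = np.random.default_rng(0)
t = np.arange(32)
y = np.exp(2j*np.pi*0.15*t) + 0.5*np.exp(2j*np.pi*0.30*t) + 0.05*rng.standard_normal(32)
freqs, spectrum = line_spectrum_estimate(y)
print(freqs[np.argsort(spectrum)[-2:]])                       # should be near 0.15 and 0.30
```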
2.
A modified fuzzy C-means algorithm for bias field estimation and segmentation of MRI data (cited 38 times: 0 self-citations, 38 by others)
Ahmed MN Yamany SM Mohamed N Farag AA Moriarty T 《IEEE transactions on medical imaging》2002,21(3):193-199
In this paper, we present a novel algorithm for fuzzy segmentation of magnetic resonance imaging (MRI) data and estimation of intensity inhomogeneities using fuzzy logic. MRI intensity inhomogeneities can be attributed to imperfections in the radio-frequency coils or to problems associated with the acquisition sequences. The result is a slowly varying shading artifact over the image that can produce errors with conventional intensity-based classification. Our algorithm is formulated by modifying the objective function of the standard fuzzy c-means (FCM) algorithm to compensate for such inhomogeneities and to allow the labeling of a pixel (voxel) to be influenced by the labels in its immediate neighborhood. The neighborhood effect acts as a regularizer and biases the solution toward piecewise-homogeneous labelings. Such a regularization is useful in segmenting scans corrupted by salt and pepper noise. Experimental results on both synthetic images and MR data are given to demonstrate the effectiveness and efficiency of the proposed algorithm.
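A hedged sketch of the neighborhood-regularized FCM idea follows (bias-field estimation is omitted for brevity, so this is only part of the modification the paper makes). The neighborhood weight `alpha`, number of classes, and iteration count are illustrative, not the paper's values.

```python
# Each pixel's membership depends on its own squared distance to the class centroids
# and on the average distance of its 4-neighbors, which biases the labeling toward
# piecewise-homogeneous regions.
import numpy as np

def neighbor_mean(d):
    """Average of the 4-neighbor values of a 2-D distance map (edge-replicated)."""
    p = np.pad(d, 1, mode="edge")
    return (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:]) / 4.0

def regularized_fcm(img, n_classes=3, m=2.0, alpha=0.7, n_iter=30):
    v = np.linspace(img.min(), img.max(), n_classes)          # initial centroids
    for _ in range(n_iter):
        d = np.stack([(img - vi) ** 2 for vi in v])           # (K, H, W) squared distances
        eff = d + alpha * np.stack([neighbor_mean(di) for di in d]) + 1e-12
        u = eff ** (-1.0 / (m - 1.0))
        u /= u.sum(axis=0, keepdims=True)                      # fuzzy memberships
        um = u ** m
        v = np.array([(um[i] * img).sum() / um[i].sum() for i in range(n_classes)])
    return u.argmax(axis=0), v                                 # hard labels, centroids
```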
3.
In the last half decade, fast methods of magnetic resonance imaging have led to the possibility, for the first time, of non-invasive dynamic brain imaging. This has led to an explosion of work in the neurosciences. From a signal processing viewpoint the problems are those of nonlinear spatio-temporal system identification. In this paper, we develop new methods of identification using novel spatial regularization. We also develop a new model comparison technique and use that to compare our method with existing techniques on some experimental data.
4.
Victor Gonzalez-Huitron Volodymyr Ponomaryov Eduardo Ramos-Diaz Sergiy Sadovnychiy 《Signal, Image and Video Processing》2018,12(2):231-238
A novel framework for sparse and dense disparity estimation is designed and implemented on both CPU and GPU for parallel processing. The Census transform is applied in the first stage; the Hamming distance is then used as the similarity measure in the stereo matching stage, followed by a matching consistency check. Next, disparity refinement is performed on the sparse disparity map via weighted median filtering and color K-means segmentation, with additional clustered median filtering to obtain the dense disparity map. The results are compared with state-of-the-art frameworks, showing the proposed process to be competitive and robust. The quality criteria are the structural similarity index measure and the percentage of bad pixels (B) for objective evaluation, together with subjective assessment via the human visual system, which demonstrates better performance in preserving fine features in the disparity maps. The comparisons include processing times and running environments, to place each process in context.
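An illustrative sketch of the first two stages (not the authors' CPU/GPU implementation) is given below: a Census transform over a small window, followed by Hamming-distance matching costs between the left and right views over a range of candidate disparities. The window radius and disparity range are example values.

```python
import numpy as np

def census_transform(img, radius=2):
    """Bit string per pixel: 1 where a neighbor is darker than the center."""
    h, w = img.shape
    pad = np.pad(img, radius, mode="edge")
    bits = []
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            if dy == 0 and dx == 0:
                continue
            shifted = pad[radius + dy : radius + dy + h, radius + dx : radius + dx + w]
            bits.append((shifted < img).astype(np.uint8))
    return np.stack(bits)                                     # (n_bits, H, W)

def hamming_cost_volume(census_l, census_r, max_disp=32):
    """Cost volume C[d, y, x] = Hamming distance at disparity d (left image as reference)."""
    n_bits, h, w = census_l.shape
    cost = np.full((max_disp, h, w), n_bits, dtype=np.int32)
    for d in range(max_disp):
        cost[d, :, d:] = (census_l[:, :, d:] != census_r[:, :, : w - d]).sum(axis=0)
    return cost

# Winner-takes-all disparity from the cost volume:
# disparity = hamming_cost_volume(census_transform(L), census_transform(R)).argmin(axis=0)
```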
5.
In the multi-view video plus depth (MVD) 3D video format, the depth map provides the scene geometry of the video; it is not displayed at the terminal but is used to render virtual view images via depth-image-based rendering (DIBR). During depth map compression, distortion of the depth map causes distortion in the rendered virtual view images. Because the depth map is used for rendering rather than display, accurately estimating the virtual view distortion caused by the depth map can improve the rate-distortion performance of depth map coding. This paper analyzes the virtual view distortion induced by different depth map distortions and proposes an exponential model for estimating the virtual view distortion caused by depth map distortion, which is then applied in the rate-distortion optimization (RDO) of depth map coding. Experimental results show that the proposed model accurately estimates the virtual view distortion caused by depth map distortion and improves depth map coding performance; compared with the VSO in HTM, it reduces encoding time by about 10% while yielding slightly better virtual view quality than HTM.
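A schematic sketch of how an exponential depth-to-virtual-view distortion model might plug into an RDO mode decision is shown below. The functional form a·(1 − exp(−b·d_depth)) and all constants are hypothetical stand-ins; the paper fits its own model parameters.

```python
import math

def virtual_view_distortion(d_depth, a=120.0, b=0.05):
    """Estimated synthesized-view distortion for a given depth-map distortion (illustrative model)."""
    return a * (1.0 - math.exp(-b * d_depth))

def rdo_cost(d_depth, rate_bits, lam=30.0):
    """Lagrangian cost J = D_virtual + lambda * R used to pick a coding mode."""
    return virtual_view_distortion(d_depth) + lam * rate_bits

# Choose the candidate coding mode with the smallest Lagrangian cost.
candidates = [{"mode": "intra", "d_depth": 4.0, "bits": 96},
              {"mode": "skip",  "d_depth": 9.0, "bits": 8}]
best = min(candidates, key=lambda c: rdo_cost(c["d_depth"], c["bits"]))
print(best["mode"])
```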
6.
We propose an iterative rake receiver structure using an optimum semi-blind channel estimation algorithm for DS-CDMA mobile communication systems. This receiver performs an iterative estimation of the channel according to the maximum a posteriori criterion, using the expectation-maximization algorithm. The estimation process requires a convenient representation of the discrete multipath fading channel based on the Karhunen-Loève orthogonal expansion theorem. The rake receiver uses pilot symbols as well as unknown control and data symbols optimally to improve channel estimation quality. Moreover, it can take into account the coded structure of all unknown transmitted symbols when channel estimation quality is poor or unsatisfactory. The validity of the proposed method is highlighted by simulation results obtained for the FDD mode of the UMTS interface.
7.
A frequently encountered problem in reliability verification is described, and several approaches to managing it are presented. The general problem is the desire to use field reliability data rather than laboratory data to evaluate equipment performance despite the deficiencies of reported field data. The problem is most severe when the equipment age is unknown at the start of a period of equipment monitoring and the tracking of the equipment is stopped after a relatively short observation interval. The use of the resulting data set to construct life distribution parameter estimates is described here. Several heuristic parameter estimation algorithms are defined. The set of proposed methods is reduced on the basis of tractability concerns, and the remaining methods are applied to simulated data sets. For comparison, the corresponding unabbreviated data sets are analyzed using established methods. The comparisons are made for life data that is negative exponential and for data that is Weibull with both increasing and decreasing hazard cases.
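As a minimal illustrative sketch (not one of the paper's heuristic algorithms), the snippet below shows maximum-likelihood estimation of a negative-exponential life distribution from a short monitoring window in which some units fail and the rest are right-censored.

```python
# lambda_hat = (number of observed failures) / (total observed exposure time).
# Because the exponential distribution is memoryless, an unknown equipment age at
# the start of monitoring does not bias this particular estimate.
import numpy as np

def exponential_mle(times, failed):
    """times: observed time on test per unit; failed: True if a failure was observed."""
    times = np.asarray(times, dtype=float)
    failed = np.asarray(failed, dtype=bool)
    return failed.sum() / times.sum()        # failures per unit of exposure

# Example: five units monitored for at most 100 h; two fail, three are censored.
rate = exponential_mle([40.0, 75.0, 100.0, 100.0, 100.0], [True, True, False, False, False])
print(1.0 / rate)                            # estimated mean life in hours
```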
8.
9.
A novel approach to image-based modelling and rendering is proposed. A set of randomly placed panoramas and omnidirectional depth information of the surrounding scene are used to generate an output image at a vantage viewpoint. Implementation results show that the proposed approach achieves smooth walkthrough in real time.
10.
Sea-surface wind field estimation from synthetic aperture radar (SAR) images is widely accepted. Most wind-speed retrieval algorithms take an estimated wind direction and the calibrated δvv as prior inputs and compute the wind speed with a sea-wind model. For the same wind direction, different sea-wind models yield different retrieved wind speeds, so choosing a suitable model is key to wind field estimation. The accuracy of the wind direction data is also important: even a small error produces a noticeable deviation in the retrieved wind speed. To address these problems, this paper proposes a wind field estimation algorithm that does not require the wind direction to be known in advance. The algorithm estimates the wind speed from the wind-streak information in ocean SAR images, using the periodicity of the autocorrelation function generated by the wind streaks, and estimates the wind direction from the direction of the shortest streak period, thereby recovering the complete wind vector. Simulation results show that the algorithm achieves high estimation accuracy for both wind speed and wind direction.
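A hedged sketch of the core idea follows: compute the 2-D autocorrelation of an SAR sub-image via the Wiener-Khinchin relation and search over directions for the one with the shortest streak period; that direction is taken as the wind direction and the period feeds the speed estimate. The model mapping period to wind speed is not reproduced here, and the angle sampling is an example choice.

```python
import numpy as np

def streak_direction_and_period(patch, n_angles=180):
    patch = patch - patch.mean()
    acf = np.fft.ifft2(np.abs(np.fft.fft2(patch)) ** 2).real  # autocorrelation via the power spectrum
    acf = np.fft.fftshift(acf) / acf.max()
    cy, cx = np.array(acf.shape) // 2
    radius = min(cy, cx) - 1
    best = (np.inf, 0.0)                                       # (period, angle)
    for angle in np.linspace(0.0, np.pi, n_angles, endpoint=False):
        r = np.arange(1, radius)
        profile = acf[(cy + np.round(r * np.sin(angle)).astype(int)) % acf.shape[0],
                      (cx + np.round(r * np.cos(angle)).astype(int)) % acf.shape[1]]
        peaks = np.where((profile[1:-1] > profile[:-2]) & (profile[1:-1] > profile[2:]))[0]
        if peaks.size:                                         # first secondary maximum along this direction
            period = r[peaks[0] + 1]
            if period < best[0]:
                best = (period, angle)
    return best                                                # shortest streak period and its direction
```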
11.
In recent years, research on hypersonic vehicles has received worldwide attention and has major military significance; navigation, guidance, and control are key technologies of hypersonic research. Since starlight (celestial) navigation offers strong anti-jamming capability, high navigation accuracy, and autonomous operation, this paper studies star-image restoration algorithms for hypersonic vehicles using starlight navigation. Considering the aero-optical effects induced by hypersonic flight, and based on an analysis of high-speed laminar and turbulent flow fields, an incremental Wiener filter and a blind-deconvolution restoration algorithm on a finite support domain are applied to restore the degraded star images. In view of the requirements that starlight navigation of high-speed vehicles places on the restored star images, the centroid deviation and the variation of recognition features of the restored star images are analyzed through simulation. Simulation results show that the blind-deconvolution algorithm on a finite support domain achieves higher accuracy, and the restored star images can be quickly and correctly recognized by the starlight navigation system of the high-speed vehicle.
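For orientation, the snippet below shows classical Wiener deconvolution in the frequency domain. It is only the textbook building block, not the incremental Wiener filter or the finite-support blind deconvolution used in the paper; the PSF and the noise-to-signal ratio `k` are assumed known.

```python
import numpy as np

def wiener_deconvolve(blurred, psf, k=0.01):
    """Restore an image degraded by a known PSF plus noise."""
    H = np.fft.fft2(psf, s=blurred.shape)                      # PSF transfer function
    G = np.fft.fft2(blurred)
    W = np.conj(H) / (np.abs(H) ** 2 + k)                      # Wiener filter
    return np.real(np.fft.ifft2(W * G))
```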
12.
Chun Ruan Scott Yang Geoffrey D Clarke Maxwell R Amurao Scott R Partyka Yong C Bradley Kenneth Cusi 《IEEE transactions on information technology in biomedicine》2006,10(3):574-580
Magnetic resonance first-pass perfusion imaging offers a noninvasive method for the rapid, accurate, and reproducible assessment of cardiac function without ionizing radiation. Quantitative or semiquantitative analysis of changes in signal intensity (SI) over the whole image sequence yields a more efficient analysis than direct visual inspection. In this paper, a method to generate maximum up-slope myocardial perfusion maps is presented. The maximum up-slope is defined by comparison of the SI variations using frame-to-frame analysis. A map of first-pass transit of the contrast agent is constructed pixel by pixel using a linear curve fitting model. The proposed method was evaluated using data from eight subjects. The data from the parametric maps agreed well with those obtained from traditional, manually derived region-of-interest methods, as shown by ANOVA. The straightforward implementation and the increase in image analysis efficiency suggest that the method may be useful for clinical practice.
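An illustrative sketch of a maximum up-slope map follows: for each pixel, a straight line is fitted to the signal-intensity curve inside a short sliding window of frames, and the steepest slope is kept. The window length is an example value, not the paper's setting.

```python
import numpy as np

def max_upslope_map(series, window=5):
    """series: (n_frames, H, W) first-pass perfusion image sequence."""
    n_frames, h, w = series.shape
    t = np.arange(window, dtype=float)
    t -= t.mean()                                              # centered frame indices
    denom = (t ** 2).sum()
    best = np.full((h, w), -np.inf)
    for start in range(n_frames - window + 1):
        seg = series[start : start + window].reshape(window, -1)
        slope = (t @ (seg - seg.mean(axis=0))) / denom         # per-pixel least-squares slope
        best = np.maximum(best, slope.reshape(h, w))
    return best
```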
13.
Hoge W.S. Miller E.L. Lev-Ari H. Brooks D.H. Panych L.P. 《IEEE transactions on image processing》2002,11(10):1168-1178
Dynamic magnetic resonance imaging (MRI) refers to the acquisition of a sequence of MRI images to monitor temporal changes in tissue structure. We present a method for the estimation of dynamic MRI sequences based on two complementary strategies: an adaptive framework for the estimation of the MRI images themselves, and an adaptive method to tailor the MRI system excitations for each data acquisition. We refer to this method as the doubly adaptive temporal update method (DATUM) for dynamic MRI. Analysis of the adaptive image estimation framework shows that calculating the optimal system excitations for each new image requires complete knowledge of the next image in the sequence. Since this is not realizable, we introduce a linear predictor to aid in determining appropriate excitations. Simulated examples using real MRI data illustrate that the doubly adaptive strategy can provide estimates with lower steady-state error than previously proposed methods, as well as the ability to recover from dramatic changes in the image sequence.
14.
In this paper, we propose a single-image deblurring algorithm to remove spatially variant defocus blur based on an estimated blur map. First, we estimate the blur map from a single image by utilizing edge information and K-nearest-neighbors (KNN) matting interpolation. Second, local kernels are derived by segmenting the blur map according to the blur amount of local regions and image contours. Third, we adopt a BM3D-based non-blind deconvolution algorithm to restore the latent image. Finally, ringing artifacts and noise are detected and removed to obtain a high-quality in-focus image. Experimental results on real defocus-blurred images demonstrate that the proposed algorithm outperforms some state-of-the-art approaches.
15.
Prost R. Yi Ding Baskurt A. Benoit-Cattin H. 《IEEE transactions on image processing》1999,8(4):564-570
Two enhanced subband coding schemes using a regularized image restoration technique are proposed: the first controls the global regularity of the decompressed image; the second extends the first approach at each decomposition level. The quantization scheme incorporates scalar quantization (SQ) and pyramidal lattice vector quantization (VQ) with both optimal bit and quantizer allocation. Experimental results show that both the block effect due to VQ and the quantization noise are significantly reduced.
16.
Online Regularized Classification Algorithms (cited 2 times: 0 self-citations, 2 by others)
《IEEE transactions on information theory / Professional Technical Group on Information Theory》2006,52(11):4775-4788
This paper considers online classification learning algorithms based on regularization schemes in reproducing kernel Hilbert spaces associated with general convex loss functions. A novel capacity-independent approach is presented. It verifies the strong convergence of the algorithm under a very weak assumption on the step sizes and yields satisfactory convergence rates for polynomially decaying step sizes. Explicit learning rates with respect to the misclassification error are given in terms of the choice of step sizes and the regularization parameter (depending on the sample size). Error bounds associated with the hinge loss, the least squares loss, and the support vector machine q-norm loss are presented to illustrate the method.
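A hedged sketch of an online regularized kernel classification update with the hinge loss is given below, using polynomially decaying step sizes eta_t = eta0 / t^theta. The Gaussian kernel width, `lam`, `eta0`, and `theta` are illustrative values only.

```python
# Update rule: f_{t+1} = (1 - eta_t * lam) * f_t + eta_t * y_t * K(x_t, .) whenever
# the hinge loss is active (y_t * f_t(x_t) < 1), i.e. stochastic gradient descent on
# the regularized hinge-loss objective in the RKHS.
import numpy as np

def gaussian_kernel(a, b, sigma=1.0):
    return np.exp(-np.sum((a - b) ** 2) / (2.0 * sigma ** 2))

def online_hinge_rkhs(stream, lam=0.01, eta0=0.5, theta=0.5):
    """stream: iterable of (x, y) with y in {-1, +1}. Returns support points and coefficients."""
    points, coeffs = [], []
    for t, (x, y) in enumerate(stream, start=1):
        eta = eta0 / t ** theta                                 # polynomially decaying step size
        f_x = sum(c * gaussian_kernel(p, x) for p, c in zip(points, coeffs))
        coeffs = [(1.0 - eta * lam) * c for c in coeffs]        # shrinkage from the regularizer
        if y * f_x < 1.0:                                       # hinge loss is active
            points.append(np.asarray(x, dtype=float))
            coeffs.append(eta * y)
    return points, coeffs
```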
17.
18.
Longitudinal and transverse gradient models of the planar gradient magnetic field for MRI (magnetic resonance imaging) are established, analyzed, and computed to obtain a uniform planar MRI gradient field. The analysis focuses on a longitudinal gradient model of up to 7th order, composed of two "Maxwell pair" windings with different numbers of coil turns, and a transverse gradient model composed of rectangular windings of twelve series-fed K-turn coils spaced 2z0 apart. On this basis, the basic parameters of the models are further optimized, yielding a high-linearity gradient field with a linearity deviation of less than 1% over a larger region. The analysis shows that the 3rd-, 5th-, and 7th-order terms all affect the linearity of the field gradient, and that weakening the influence of the higher odd-order derivatives on the magnetic flux density produces a more uniform linear gradient field. The results show that, under the same optimization method, the optimization effect for the longitudinal gradient model is better than that for the transverse gradient model.
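As a simplified illustration of the underlying idea (not the paper's planar model), the sketch below evaluates the on-axis field of an ideal circular Maxwell pair with the standard current-loop formula and checks how far its gradient departs from linearity near the center. The coil radius, current, and evaluation range are arbitrary example values.

```python
# Two coaxial loops of radius R at z = +/- d carrying opposite currents, with
#   Bz(z) = mu0 * I * R**2 / (2 * (R**2 + (z - z0)**2) ** 1.5) per loop.
# Choosing d = R * sqrt(3) / 2 cancels the third-order term of Bz at the center,
# the same idea of suppressing higher odd-order terms to linearize the gradient.
import numpy as np

MU0 = 4e-7 * np.pi

def loop_bz(z, z0, R=0.3, I=100.0):
    return MU0 * I * R ** 2 / (2.0 * (R ** 2 + (z - z0) ** 2) ** 1.5)

def maxwell_pair_bz(z, R=0.3, I=100.0):
    d = R * np.sqrt(3.0) / 2.0
    return loop_bz(z, +d, R, I) - loop_bz(z, -d, R, I)         # opposite currents

z = np.linspace(-0.05, 0.05, 201)
gradient = np.gradient(maxwell_pair_bz(z), z)
linearity_dev = np.max(np.abs(gradient / gradient[100] - 1.0)) # deviation from the center gradient
print(f"max gradient nonlinearity over +/-5 cm: {linearity_dev:.2%}")
```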
19.
A statistical method for detecting activated pixels in functional MRI (fMRI) data is presented. In this method, the fMRI time series measured at each pixel is modeled as the sum of a response signal which arises due to the experimentally controlled activation-baseline pattern, a nuisance component representing effects of no interest, and Gaussian white noise. For periodic activation-baseline patterns, the response signal is modeled by a truncated Fourier series with a known fundamental frequency but unknown Fourier coefficients. The nuisance subspace is assumed to be unknown. A maximum likelihood estimate is derived for the component of the nuisance subspace which is orthogonal to the response signal subspace. An estimate of the order of the nuisance subspace is obtained from an information-theoretic criterion. A statistical test is derived and shown to be the uniformly most powerful (UMP) test invariant to a group of transformations which are natural to the hypothesis testing problem. The maximal invariant statistic used in this test has an F distribution. The theoretical F distribution under the null hypothesis strongly concurred with the experimental frequency distribution obtained from null experiments in which the subjects did not perform any activation task. Application of the theory to motor activation and visual stimulation fMRI studies is presented.
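A hedged sketch of the detection idea in general-linear-model form follows: each pixel's time series is regressed on truncated-Fourier response regressors at the task's fundamental frequency plus nuisance regressors, and the response subspace is tested with an F statistic (full versus nuisance-only model). Here the nuisance subspace is a fixed low-order polynomial drift, whereas the paper estimates it by maximum likelihood.

```python
import numpy as np

def f_statistic(y, n_frames, fundamental, n_harmonics=2, nuisance_order=2):
    """y: pixel time series of length n_frames; fundamental: task cycles over the whole run."""
    t = np.arange(n_frames) / n_frames
    response = np.column_stack([f(2 * np.pi * fundamental * (k + 1) * t)
                                for k in range(n_harmonics) for f in (np.sin, np.cos)])
    nuisance = np.column_stack([t ** p for p in range(nuisance_order + 1)])
    X_full = np.column_stack([response, nuisance])

    def rss(X):                                               # residual sum of squares of a fit
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        r = y - X @ beta
        return r @ r

    rss_full, rss_red = rss(X_full), rss(nuisance)
    p_resp = response.shape[1]
    dof = n_frames - X_full.shape[1]
    return ((rss_red - rss_full) / p_resp) / (rss_full / dof)
```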
20.
A regularized color clustering algorithm is proposed to solve the color clustering problem in medical image databases. By incorporating both a measure of cluster separability and a measure of cluster compactness, regularized color clustering allows the automatic extraction of significant color groups with varying populations. Experimental results in different color spaces show that regularized color clustering gives superior results in extracting significant distinct/abnormal color clusters without significant increases in cluster compactness. Furthermore, the results of color clustering in different color spaces show that the LUV color space is more suitable for color clustering. Methods for selecting the regularization constants are also suggested.
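A building-block sketch is given below (the paper's regularized objective trading off separability against compactness is not reproduced): pixels are clustered in the LUV color space, which the paper finds more suitable for color clustering than RGB.

```python
import numpy as np
from skimage.color import rgb2luv
from sklearn.cluster import KMeans

def cluster_colors_luv(rgb_image, n_clusters=6):
    """rgb_image: (H, W, 3) float array in [0, 1]. Returns per-pixel labels and LUV cluster centers."""
    luv = rgb2luv(rgb_image).reshape(-1, 3)                    # convert to LUV, flatten to pixel list
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(luv)
    return km.labels_.reshape(rgb_image.shape[:2]), km.cluster_centers_
```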