Similar Articles
20 similar articles found (search time: 31 ms)
1.
Data fusion provides tools for solving problems characterized by distributed and diverse information sources. In this paper, the authors focus on extracting features such as image discontinuities from both synthetic and real images. Since edge detection and surface reconstruction are ill-posed problems in the sense of Hadamard, Tikhonov's regularization paradigm is proposed as the basic tool for solving this inversion problem and restoring well-posedness. The proposed framework includes: (1) a systematic view of one- and two-dimensional regularization; (2) an extension of the standard Tikhonov regularization method that allows space-variant regularization parameters; and (3) a further extension of the regularization paradigm that adds multiple data sources to allow for data fusion. The theoretical approach is complemented by a series of algorithms for the early vision problems of color edge detection and surface reconstruction. An evaluation of these methods reveals that the output of this new analytical data fusion technique is consistently better than each of the individual RGB edge maps, and that noisy corrupted images are reconstructed into smooth, noiseless surfaces.

2.
Generation of anisotropic-smoothness regularization filters for EIT (cited by 3; 0 self-citations)
In the inverse conductivity problem, as in any ill-posed inverse problem, regularization techniques are necessary in order to stabilize inversion. A common way to implement regularization in electrical impedance tomography is Tikhonov regularization. The inverse problem is formulated as the minimization of two terms: the mismatch of the measurements against the model, and the regularization functional. Most commonly, differential operators are used as regularization functionals, leading to smooth solutions. Whenever the imaged region presents discontinuities in the conductivity distribution, such as interorgan boundaries, the smoothness prior is not consistent with the actual situation. In these cases, the reconstruction is enhanced by relaxing the smoothness constraints in the direction normal to the discontinuity. In this paper, we derive a method for generating Gaussian anisotropic regularization filters. The filters are generated on the basis of prior structural information, allowing a better reconstruction of conductivity profiles matching these priors. When incorporating prior information into a reconstruction algorithm, the risk is of biasing the inverse solutions toward the assumed distributions. Simulations show that, with a careful selection of the regularization parameters, the reconstruction algorithm is still able to detect conductivity patterns that violate the prior information. A generalized singular-value decomposition analysis of the effects of the anisotropic filters on regularization is presented in the last sections of the paper.
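The idea of relaxing smoothness across a known boundary can be sketched in one dimension with a weighted difference operator: small weights on edges that cross an assumed discontinuity relax the smoothness constraint there. This is a 1-D toy analogue, not the paper's Gaussian anisotropic filters; all names and values are hypothetical.

```python
import numpy as np

def weighted_smoothness_solve(A, b, lam, weights):
    """Generalized Tikhonov with a weighted first-difference prior."""
    n = A.shape[1]
    D = np.diff(np.eye(n), axis=0)       # first differences between neighbors
    L = weights[:, None] * D             # per-edge smoothness weights
    return np.linalg.solve(A.T @ A + lam * (L.T @ L), A.T @ b)

# Piecewise-constant "conductivity" with a known discontinuity at index 25.
x_true = np.r_[np.ones(25), 2 * np.ones(25)]
rng = np.random.default_rng(1)
b = x_true + 0.05 * rng.standard_normal(50)
w = np.ones(49)
w[24] = 1e-3                             # relax smoothing across the jump
x_aniso = weighted_smoothness_solve(np.eye(50), b, 10.0, w)
x_iso = weighted_smoothness_solve(np.eye(50), b, 10.0, np.ones(49))
print(abs(x_aniso[24] - x_aniso[25]), abs(x_iso[24] - x_iso[25]))
```

The weighted prior keeps the step sharp while smoothing within each region, whereas the isotropic prior smears the discontinuity.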

3.
Liu Guangdong, Ge Xintong. 《电子学报》 (Acta Electronica Sinica), 2015, 43(12): 2518-2524
Among existing empirical models, the multi-pole Debye model is best suited to accurately describing the dispersive properties of media such as biological tissue, soil, and water. To simultaneously reconstruct the electromagnetic parameters of such media, this paper proposes an improved time-domain inverse scattering technique: an iterative method and Tikhonov regularization are applied to overcome the nonlinearity and ill-posedness of the inverse problem, respectively; the gradient of the objective functional with respect to the target parameters is derived analytically; and during the iterative reconstruction, the forward and inverse subproblems are solved with the finite-difference time-domain (FDTD) method and the conjugate gradient (CG) method, respectively. Two one-dimensional (1-D) numerical examples under noisy conditions provide preliminary evidence of the feasibility and robustness of the technique.
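The multi-pole Debye model mentioned above has a standard closed form, ε(ω) = ε∞ + Σ_p Δε_p/(1 + jωτ_p) + σ_s/(jωε0). A sketch of its evaluation follows; the single-pole parameter values are illustrative (loosely in the range used for water), not taken from the paper.

```python
import numpy as np

EPS0 = 8.854187817e-12  # vacuum permittivity, F/m

def debye_permittivity(omega, eps_inf, delta_eps, tau, sigma_s=0.0):
    """Complex relative permittivity of a multi-pole Debye medium."""
    omega = np.asarray(omega, dtype=float)
    eps = eps_inf + sum(de / (1 + 1j * omega * t)
                        for de, t in zip(delta_eps, tau))
    if sigma_s:
        eps = eps + sigma_s / (1j * omega * EPS0)
    return eps

# Single-pole example: the static limit is eps_inf + delta_eps.
w = np.logspace(6, 11, 5)  # angular frequencies, rad/s
eps = debye_permittivity(w, eps_inf=4.0, delta_eps=[36.0], tau=[8.1e-12])
print(eps[0].real)  # ~40.0 at low frequency (static limit)
```

In an inversion of the kind described, (eps_inf, delta_eps, tau, sigma_s) are the unknowns the iterative scheme reconstructs.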

4.
Yan Hua, Liu Ju. 《电子学报》 (Acta Electronica Sinica), 2007, 35(7): 1409-1413
As the second generation of image restoration, super-resolution image restoration has become a research hotspot in the international image restoration community. In general, super-resolution restoration is an ill-posed problem; it can be made well-posed by incorporating prior information about the image, which requires an effective regularization algorithm. However, the regularization parameter is usually chosen empirically, and existing methods for computing it are overly complicated. This paper analyzes the statistical model of the noise introduced by subpixel registration error and, using the idea of Miller regularization, derives a simple and practical method for computing the regularization parameter. The method adaptively and locally adjusts for the distortion caused by registration error, according to the registration error and the observation noise. Simulation results show that the resulting regularization parameter makes the regularization algorithm converge effectively.
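Miller's rule chooses the regularization parameter from a bound ε on the data misfit and a bound E on the solution (semi)norm, λ = (ε/E)². A sketch of the locally adaptive variant suggested by the abstract, where a larger local noise bound (e.g., due to registration error) yields stronger local regularization; all values are hypothetical.

```python
import numpy as np

def miller_lambda(noise_bound, solution_bound):
    """Miller's criterion: lambda = (epsilon / E)^2 for the bounds
    ||A x - b|| <= epsilon and ||L x|| <= E."""
    return (noise_bound / solution_bound) ** 2

# Spatially varying choice: per-region noise bounds (registration error
# inflates the third region), with a common prior bound E on ||L x||.
local_noise = np.array([0.1, 0.1, 0.5, 0.1])
E = 2.0
lam_map = miller_lambda(local_noise, E)
print(lam_map)  # strongest smoothing where the local noise bound is largest
```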

5.
A time-domain inverse scattering method for layered lossy dispersive media (cited by 2; 0 self-citations)
Liu Guangdong, Zhang Yerong. 《电子学报》 (Acta Electronica Sinica), 2011, 39(12): 2856-2862
To reconstruct the characteristic parameters of layered lossy dispersive media, we propose a new time-domain inverse scattering method based on functional analysis and the calculus of variations. The method first constructs an objective function under the least-squares criterion, formulating the inverse problem as a constrained minimization problem; it then converts this into an unconstrained minimization problem by the penalty function method; next, closed-form Fréchet derivatives of the Lagrange function with respect to the characteristic parameters are derived via variational calculus; finally, with the aid of a gradient algorithm and the finite-difference time-domain (…

6.
Different regularization techniques used in conjunction with the Gauss-Newton inversion method for electromagnetic inverse scattering problems are studied and classified into two main categories. The first category attempts to regularize the quadratic form of the nonlinear data misfit cost-functional at different iterations of the Gauss-Newton inversion method. This can be accomplished by utilizing penalty methods or projection methods. The second category tries to regularize the nonlinear data misfit cost-functional before applying the Gauss-Newton inversion method. This type of regularization may be applied via additive, multiplicative, or additive-multiplicative terms. We show that these two regularization strategies can be viewed within a single consistent framework.
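The penalty-method category can be sketched as a Gauss-Newton iteration on a Tikhonov-penalized nonlinear least-squares cost. The toy elementwise forward model below stands in for an actual scattering operator; everything here is an illustrative assumption, not the paper's formulation.

```python
import numpy as np

def gauss_newton_tikhonov(F, J, d, x0, lam, iters=20):
    """Regularized Gauss-Newton for min ||F(x) - d||^2 + lam ||x||^2."""
    x = x0.copy()
    for _ in range(iters):
        r = d - F(x)
        Jx = J(x)
        # Normal equations of the linearized, penalized subproblem.
        dx = np.linalg.solve(Jx.T @ Jx + lam * np.eye(x.size),
                             Jx.T @ r - lam * x)
        x = x + dx
    return x

# Toy nonlinear forward model F(x) = x + 0.1 x^3 (elementwise).
F = lambda x: x + 0.1 * x ** 3
J = lambda x: np.diag(1 + 0.3 * x ** 2)   # Jacobian of F
x_true = np.array([1.0, -0.5, 2.0])
d = F(x_true)                              # noiseless synthetic data
x_hat = gauss_newton_tikhonov(F, J, d, np.zeros(3), lam=1e-6)
print(np.allclose(x_hat, x_true, atol=1e-3))  # True
```

With noiseless data and a tiny penalty, the iteration recovers the true parameters; with noisy data, lam trades data fit against solution norm exactly as in the linear Tikhonov case.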

7.
This paper describes a new approach to reconstruction of the conductivity field in electrical impedance tomography. Our goal is to improve the tradeoff between the quality of the images and the numerical complexity of the reconstruction method. In order to reduce the computational load, we adopt a linearized approximation to the forward problem that describes the relationship between the unknown conductivity and the measurements. In this framework, we focus on finding a proper way to cope with the ill-posed nature of the problem, mainly caused by strong attenuation phenomena; this is done by devising regularization techniques well suited to this particular problem. First, we propose a solution which is based on Tikhonov regularization of the problem. Second, we introduce an original regularized reconstruction method in which the regularization matrix is determined by space-uniformization of the variance of the reconstructed conductivities. Both methods are unsupervised, i.e., all tuning parameters are automatically determined from the measured data. Tests performed on simulated and real data indicate that Tikhonov regularization provides results similar to those obtained with iterative methods, but with a much smaller amount of computation. Regularization using a variance uniformization constraint yields further improvements, particularly in the central region of the unknown object where attenuation is most severe. We anticipate that the variance uniformization approach could be adapted to iterative methods that preserve the nonlinearity of the forward problem. More generally, it appears as a useful tool for solving other severely ill-posed reconstruction problems such as eddy current tomography.
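The variance-uniformization idea can be illustrated on a linear model: for x̂ = (AᵀA + R)⁻¹Aᵀb under white noise, the per-pixel variance is diag((AᵀA + R)⁻¹AᵀA(AᵀA + R)⁻¹), and a diagonal R can be rescaled until that variance is roughly uniform. This is a crude fixed-point sketch of the general idea, not the paper's method; the forward model and update rule are assumptions.

```python
import numpy as np

def reconstruction_variance(A, R, sigma2=1.0):
    """Per-pixel variance of x_hat = (A^T A + R)^{-1} A^T b, white noise."""
    M = np.linalg.inv(A.T @ A + R)
    return sigma2 * np.diag(M @ A.T @ A @ M)

def uniformize(A, r0=0.01, iters=50):
    """Rescale a diagonal regularizer until the reconstruction variance is
    roughly uniform: raise the penalty where the variance is high."""
    r = np.full(A.shape[1], r0)
    for _ in range(iters):
        v = reconstruction_variance(A, np.diag(r))
        r = r * np.sqrt(v / v.mean())
    return r

# Strongly "attenuated" forward model: sensitivity decays toward the center,
# so the raw variance is highly nonuniform.
A = np.diag([1.0, 0.6, 0.3, 0.1])
v0 = reconstruction_variance(A, 0.01 * np.eye(4))
v1 = reconstruction_variance(A, np.diag(uniformize(A)))
print(v0.std() / v0.mean(), v1.std() / v1.mean())
```

After uniformization, the low-sensitivity (strongly attenuated) pixels no longer dominate the noise budget.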

8.
Shape-based solutions have recently received attention for certain ill-posed inverse problems. Their advantages include implicit imposition of relevant constraints and a reduction in the number of unknowns, especially important for nonlinear ill-posed problems. We apply the shape-based approach to current-injection electrical impedance tomography (EIT) reconstructions. We employ a boundary element method (BEM) based solution for EIT. We introduce two shape models, one based on modified B-splines and the other based on spherical harmonics, for BEM modeling of shapes. These methods allow us to parameterize the geometry of conductivity inhomogeneities in the interior of the volume. We assume the general shape of the piecewise constant inhomogeneities is known, but their conductivities and their exact locations and shapes are not. We also assume the internal conductivity profile is piecewise constant, meaning that each region has a constant conductivity. We propose and test three different regularization techniques to be used with either of the shape models. The performance of our methods is illustrated via both simulations in a digital torso model and phantom experiments when there is a single internal object. We observe that in the noisy environment, with either simulated noise or real sources of noise in the experimental study, we get reasonable reconstructions. Since the signal-to-noise ratio (SNR) expected in modern EIT instruments is higher than that used in this study, these reconstruction methods may prove useful in practice.

9.
Edge-preserving tomographic reconstruction with nonlocal regularization (cited by 4; 0 self-citations)
Tomographic image reconstruction using statistical methods can provide more accurate system modeling, statistical models, and physical constraints than the conventional filtered backprojection (FBP) method. Because of the ill-posedness of the reconstruction problem, a roughness penalty is often imposed on the solution to control noise. To avoid smoothing of edges, which are important image attributes, various edge-preserving regularization methods have been proposed. Most of these schemes rely on information from local neighborhoods to determine the presence of edges. In this paper, we propose a cost function that incorporates nonlocal boundary information into the regularization method. We use an alternating minimization algorithm with deterministic annealing to minimize the proposed cost function, jointly estimating region boundaries and object pixel values. We apply variational techniques implemented using level-set methods to update the boundary estimates; then, using the most recent boundary estimate, we minimize a space-variant quadratic cost function to update the image estimate. For the positron emission tomography transmission reconstruction application, we compare the bias-variance tradeoff of this method with that of a "conventional" penalized-likelihood algorithm with a local Huber roughness penalty.
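The Huber roughness penalty used in the baseline comparison is quadratic for small neighbor differences and linear for large ones, so genuine edges are penalized far less than under a pure quadratic. A minimal sketch (the threshold value is illustrative):

```python
import numpy as np

def huber(t, delta):
    """Huber penalty: quadratic for |t| <= delta, linear beyond, so large
    intensity differences (edges) incur only a linear cost."""
    a = np.abs(t)
    return np.where(a <= delta, 0.5 * t ** 2, delta * (a - 0.5 * delta))

d = np.array([0.1, 3.0])   # a small fluctuation vs. an edge-sized difference
print(huber(d, 1.0))       # the edge costs 2.5, versus 4.5 for a quadratic
```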

10.
Mixture models are often used in the statistical segmentation of medical images. For example, they can be used for the segmentation of structural images into different matter types or of functional statistical parametric maps (SPMs) into activations and nonactivations. Nonspatial mixture models segment using models of just the histogram of intensity values. Spatial mixture models have also been developed which augment this histogram information with spatial regularization using Markov random fields. However, these techniques have control parameters, such as the strength of spatial regularization, which need to be tuned heuristically to particular datasets. We present a novel spatial mixture model within a fully Bayesian framework with the ability to perform fully adaptive spatial regularization using Markov random fields. This means that the amount of spatial regularization does not have to be tuned heuristically but is adaptively determined from the data. We examine the behavior of this model when applied to artificial data with different spatial characteristics, and to functional magnetic resonance imaging SPMs.

11.
Yuan Haobo, Yang Meng, Dang Xiaojie, Wang Nan. 《电子学报》 (Acta Electronica Sinica), 2017, 45(10): 2549-2554
The system matrices produced by the method of moments (MoM) in computational electromagnetics are ill-conditioned, so iterative solvers converge slowly if at all, often even with existing preconditioning techniques. Borrowing the concept of regularization from the solution of ill-posed problems, this paper proposes using a regularization matrix as a preconditioner for the MoM matrix equation. This preconditioning method directly improves the eigenvalue distribution of the original matrix and requires no extra storage for the preconditioning matrix. In addition, the paper proposes determining the regularization parameter from the point of maximum second derivative of the L-curve of the regularized matrix equation, which maximizes the efficiency of solving the preconditioned system. Numerical experiments show that for the matrix equations produced by the higher-order MoM for the electric-field and magnetic-field integral equations, common preconditioned iterative methods converge slowly, whereas the preconditioned iterative method of this paper converges considerably faster.
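The L-curve plots log solution norm against log residual norm over a grid of regularization parameters; the paper picks the parameter at the maximum second derivative, and a closely related, widely used criterion locates the corner via maximum curvature. The sketch below uses the curvature variant on a toy ill-conditioned (Hilbert-matrix) system; the function name and problem are illustrative assumptions, not the paper's MoM setting.

```python
import numpy as np

def lcurve_lambda(A, b, lambdas):
    """Pick the Tikhonov parameter at the L-curve corner, located here via
    maximum curvature of (log residual norm, log solution norm)."""
    rho, eta = [], []
    for lam in lambdas:
        x = np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ b)
        rho.append(np.log(np.linalg.norm(A @ x - b) + 1e-30))
        eta.append(np.log(np.linalg.norm(x) + 1e-30))
    rho, eta, t = np.array(rho), np.array(eta), np.log(lambdas)
    dr, de = np.gradient(rho, t), np.gradient(eta, t)
    d2r, d2e = np.gradient(dr, t), np.gradient(de, t)
    kappa = (dr * d2e - d2r * de) / (dr ** 2 + de ** 2) ** 1.5
    # Ignore the grid endpoints, where one-sided differences are unreliable.
    i = 2 + int(np.argmax(kappa[2:-2]))
    return lambdas[i]

# Ill-conditioned toy problem: Hilbert matrix with slightly noisy data.
n = 8
A = 1.0 / (np.arange(n)[:, None] + np.arange(n)[None, :] + 1.0)
rng = np.random.default_rng(4)
b = A @ np.ones(n) + 1e-4 * rng.standard_normal(n)
lam = lcurve_lambda(A, b, np.logspace(-14, 0, 57))
print(lam)
```

The same corner-finding logic applies whether the regularized system is solved directly, as here, or used as a preconditioner as in the paper.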

12.
A constrained disparity estimation method is proposed that uses a directional regularization technique to efficiently preserve edges for stereo image coding. The proposed method smoothes disparity vectors in smooth regions and preserves edges at object boundaries well, without creating an oversmoothing problem. The differential pulse code modulation (DPCM) technique for disparity map coding is used prior to entropy coding, in order to improve the overall coding efficiency. The proposed disparity estimation method can also be applied to intermediate view reconstruction. Intermediate views between a left image and a right image provide reality and natural motion parallax to multiple viewers. Intermediate views are synthesized by appropriately exploiting an interpolation or an extrapolation technique according to the characteristics of each region, after identifying regions as occluded regions, normal regions, and regions having ambiguous disparities. The experimental results show that the proposed disparity estimation method gives close matches between a left image and a right image and improves coding efficiency. In addition, we can subjectively confirm that the application of our proposed intermediate view reconstruction method leads to satisfactory intermediate views from a stereo image pair. This work was supported by the Korea Institute of Science and Technology (KIST) under Grant No. 99HI-054.

13.
The inverse black body radiation problem is concerned with the determination of the area temperature distribution of a black body source from spectral measurements of its radiation. Although several inversion approaches have been developed, none of them has overcome the problem of ill-posedness. In this study, Tikhonov's regularization method is applied to the solution of the inverse black body radiation problem. A very simple implementation of this approach, together with applications of the method, is presented. The effect of the regularization parameter and operator on the results is discussed.

14.
We show that electrical impedance tomography (EIT) image reconstruction algorithms with regularization based on the total variation (TV) functional are suitable for in vivo imaging of physiological data. This reconstruction approach helps to preserve discontinuities in reconstructed profiles, such as step changes in electrical properties at interorgan boundaries, which are typically smoothed by traditional reconstruction algorithms. The use of the TV functional for regularization leads to the minimization of a nondifferentiable objective function in the inverse formulation, which cannot be solved efficiently with traditional optimization techniques such as the Newton method. We explore two implementation methods for regularization with the TV functional: the lagged diffusivity method and the primal-dual interior-point method (PD-IPM). First, we clarify the implementation details of these algorithms for EIT reconstruction. Next, we analyze the performance of these algorithms on noisy simulated data. Finally, we show reconstructed EIT images of in vivo data for ventilation and gastric emptying studies. In comparison to traditional quadratic regularization, TV regularization shows improved ability to reconstruct sharp contrasts.
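The lagged diffusivity method handles the nondifferentiable TV term by freezing the diffusivity weights at the current iterate and solving a weighted quadratic problem, repeating to a fixed point. A 1-D denoising sketch (identity forward operator, smoothed TV with a small β, all values illustrative, not the EIT setting of the paper):

```python
import numpy as np

def tv_denoise_lagged(b, lam, beta=1e-6, iters=30):
    """1-D TV denoising via the lagged-diffusivity fixed point:
    solve (I + lam * D^T W(u) D) u = b with W = diag(1/sqrt((Du)^2 + beta))."""
    n = b.size
    D = np.diff(np.eye(n), axis=0)
    u = b.copy()
    for _ in range(iters):
        w = 1.0 / np.sqrt((D @ u) ** 2 + beta)   # frozen diffusivity weights
        u = np.linalg.solve(np.eye(n) + lam * D.T @ (w[:, None] * D), b)
    return u

# Step profile (like an interorgan conductivity jump) plus noise.
rng = np.random.default_rng(2)
x_true = np.r_[np.zeros(30), np.ones(30)]
b = x_true + 0.1 * rng.standard_normal(60)
u = tv_denoise_lagged(b, lam=0.5)
print(np.linalg.norm(u - x_true), np.linalg.norm(b - x_true))
```

Unlike a quadratic penalty, the TV fixed point flattens each region while keeping the step between them sharp.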

15.
A new method for myocardial activation imaging (cited by 3; 0 self-citations)
Noninvasive images of the myocardial activation sequence are acquired, based on a new formulation of the inverse problem of electrocardiography in terms of the critical points of the ventricular surface activation map. It is shown that the method is stable with respect to the substantial amounts of correlated noise common in the measurement and modeling of electrocardiography, and that problems associated with conventional regularization techniques can be circumvented. Examples of applying the method to measured human data are presented. In this first validation against invasive measurements, the results compare well with previously published results obtained using a standard approach. The method can provide additional constraints on, and thus improve, traditional methods aimed at solving the inverse problem of electrocardiography.

16.
It is shown that various classifiers that are based on minimization of a regularized risk are universally consistent, i.e., they can asymptotically learn in every classification task. The role of the loss functions used in these algorithms is considered in detail. As an application of our general framework, several types of support vector machines (SVMs) as well as regularization networks are treated. Our methods combine techniques from stochastics, approximation theory, and functional analysis.

17.
Many motion-compensated image reconstruction (MCIR) methods have been proposed to correct for subject motion in medical imaging. MCIR methods incorporate motion models to improve image quality by reducing motion artifacts and noise. This paper analyzes the spatial resolution properties of MCIR methods and shows that nonrigid local motion can lead to nonuniform and anisotropic spatial resolution for conventional quadratic regularizers. This undesirable property is akin to the known effects of interactions between heteroscedastic log-likelihoods (e.g., Poisson likelihood) and quadratic regularizers. This effect may lead to quantification errors in small or narrow structures (such as small lesions or rings) of reconstructed images. This paper proposes novel spatial regularization design methods for three different MCIR methods that account for known nonrigid motion. We develop MCIR regularization designs that provide approximately uniform and isotropic spatial resolution and that match a user-specified target spatial resolution. Two-dimensional PET simulations demonstrate the performance and benefits of the proposed spatial regularization design methods.

18.
The separation of image content into semantic parts plays a vital role in applications such as compression, enhancement, restoration, and more. In recent years, several pioneering works suggested basing such a separation on a variational formulation, while others used independent component analysis and sparsity. This paper presents a novel method for separating images into texture and piecewise smooth (cartoon) parts, exploiting both the variational and the sparsity mechanisms. The method combines the basis pursuit denoising (BPDN) algorithm and the total-variation (TV) regularization scheme. The basic idea presented in this paper is the use of two appropriate dictionaries, one for the representation of textures and the other for the natural scene parts assumed to be piecewise smooth. Both dictionaries are chosen such that they lead to sparse representations over one type of image content (either texture or piecewise smooth). The use of BPDN with the two amalgamated dictionaries leads to the desired separation, along with noise removal as a by-product. As the need to choose proper dictionaries is generally hard, a TV regularization is employed to better direct the separation process and reduce ringing artifacts. We present a highly efficient numerical scheme to solve the combined optimization problem posed by our model and show several experimental results that validate the algorithm's performance.
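The BPDN-over-amalgamated-dictionaries idea can be sketched in 1-D: solve min ½‖Ax − b‖² + λ‖x‖₁ where A stacks two dictionaries side by side, so the two halves of x give the separated components. The sketch below uses ISTA (proximal gradient) as the solver and DCT-plus-identity as the dictionary pair; both are stand-ins chosen for simplicity, not the paper's dictionaries or numerical scheme.

```python
import numpy as np

def soft(x, t):
    """Soft-thresholding, the proximal operator of the l1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def bpdn_ista(A, b, lam, iters=500):
    """Basis pursuit denoising, min 0.5 ||A x - b||^2 + lam ||x||_1,
    solved with ISTA (proximal gradient descent)."""
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = soft(x - A.T @ (A @ x - b) / L, lam / L)
    return x

# Amalgamated dictionary: DCT atoms for the smooth part, the identity for
# spikes (a crude stand-in for a texture dictionary).
n = 64
j, k = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * j + 1) * k / (2 * n))
C[:, 0] /= np.sqrt(2.0)                  # orthonormal DCT-II basis
smooth = 3.0 * C[:, 2]                   # one low-frequency atom
spikes = np.zeros(n)
spikes[10], spikes[40] = 1.0, -1.5
A = np.hstack([C, np.eye(n)])
x_hat = bpdn_ista(A, smooth + spikes, lam=0.01)
# The two halves of x_hat give the separated components.
print(abs(x_hat[n + 40]))                # the spike lands in the second half
```

Because the two dictionaries are mutually incoherent, the l1 penalty routes each content type to its own half of the coefficient vector.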

19.
The inverse problem of electrocardiography is solved in order to reconstruct electrical events within the heart from information measured noninvasively on the body surface. These electrical events can be deduced from measured epicardial potentials; therefore, a noninvasive method of recovering epicardial potentials from body surface data is useful in clinical and experimental work. The ill-posed nature of this problem necessitates the use of regularization in the solution procedure. Inversion using Tikhonov zero-order regularization, a quasi-static method, had been employed previously and was able to reconstruct, with relatively good accuracy, important events in cardiac excitation (maxima, minima, etc.). Taking advantage of the fact that the process of cardiac excitation is continuous in time, one can incorporate information from the time progression of excitation in the regularization procedure using the Twomey technique. Methods of this type were tested on data obtained from a human-torso tank in which a beating canine heart was placed in the correct human anatomical position. The results show a marked improvement in the inverse solution when these temporal methods are used, and demonstrate that important physiological events (e.g., right ventricular breakthrough) not detected by the quasi-static approach are reconstructed using these methods. In addition, the results indicate that as the time interval between sampled maps is reduced, the quality of the solutions that use this temporal regularization is greatly improved.
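The Twomey technique biases the regularized solution toward a reference estimate, here the previous time frame, rather than toward zero. A minimal sketch on a toy ill-conditioned system (the forward model and values are illustrative, not torso-tank data):

```python
import numpy as np

def twomey_step(A, b, lam, x_prev):
    """Twomey-regularized step: min ||A x - b||^2 + lam ||x - x_prev||^2,
    i.e., zero-order Tikhonov biased toward the previous frame."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n),
                           A.T @ b + lam * x_prev)

# Ill-conditioned forward model; the state drifts slowly between frames.
A = np.diag([1.0, 0.05])
x_prev = np.array([1.0, 1.0])        # solution at the previous frame
x_now = np.array([1.0, 1.1])         # true solution at this frame
b = A @ x_now
x_temporal = twomey_step(A, b, 0.01, x_prev)
x_zero = twomey_step(A, b, 0.01, np.zeros(2))   # ordinary zero-order Tikhonov
print(np.linalg.norm(x_temporal - x_now),
      np.linalg.norm(x_zero - x_now))  # temporal error is far smaller
```

When consecutive frames are close, the temporal prior's bias is proportional to the small frame-to-frame change rather than to the full solution amplitude, which is why finer temporal sampling improves these solutions.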

20.
Much research has shown that the definitions of the within-class and between-class scatter matrices and the regularization technique are key components in designing feature extraction for small-sample-size problems. In this paper, we illustrate the importance of another key component, the eigenvalue decomposition method, and propose a new regularization technique. In the hyperspectral image experiment, the effects of these three components of feature extraction are explored under ill-posed and poorly posed conditions. The experimental results show that different regularization methods need to cooperate with different eigenvalue decomposition methods to reach the best performance, that the proposed regularization method, regularized feature extraction (RFE), outperforms the others, and that the best feature extraction for a small-sample-size classification problem is RFE with nonparametric weighted scatter matrices.
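The small-sample-size problem arises because the within-class scatter matrix is singular when there are fewer samples than dimensions; a generic remedy is to shrink it toward a scaled identity before inverting. The sketch below shows this ridge-style regularization on a two-class Fisher discriminant; it illustrates the general idea only, not the paper's RFE definition, and all names and data are hypothetical.

```python
import numpy as np

def rlda_direction(X0, X1, alpha=0.1):
    """Fisher discriminant with a ridge-regularized within-class scatter,
    the standard fix when the sample size is below the dimensionality."""
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    Sw = (X0 - m0).T @ (X0 - m0) + (X1 - m1).T @ (X1 - m1)
    d = Sw.shape[0]
    # Shrink toward a scaled identity so Sw_reg is positive definite.
    Sw_reg = (1 - alpha) * Sw + alpha * (np.trace(Sw) / d) * np.eye(d)
    w = np.linalg.solve(Sw_reg, m1 - m0)
    return w / np.linalg.norm(w)

# Small-sample, high-dimensional ("poorly posed") toy data: 5 samples per
# class in 20 dimensions, separated along the first axis.
rng = np.random.default_rng(5)
d = 20
X0 = 0.1 * rng.standard_normal((5, d))
X1 = 0.1 * rng.standard_normal((5, d)) + np.r_[6.0, np.zeros(d - 1)]
w = rlda_direction(X0, X1)
print((X1 @ w).min() > (X0 @ w).max())  # classes separate cleanly along w
```

Without the shrinkage term, `Sw` here has rank at most 8 in 20 dimensions and cannot be inverted reliably.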
