Similar Literature
20 similar documents found (search time: 62 ms)
1.
Collaborative construction of the measurement matrix and the reconstruction algorithm in compressed sensing (cited 2 times)
李佳, 王强, 沈毅, 李波. 《电子学报》, 2013, 41(1): 29-34
This paper proposes a sensing-dictionary-based iterative hard thresholding (SDIHT) algorithm, a collaborative construction of the measurement matrix and the reconstruction algorithm in compressed sensing. A paired measurement matrix and sensing dictionary are used for compressive projection and for building the reconstruction algorithm, respectively; reconstruction iterates until the residual reaches zero, exactly recovering the original sparse signal. A sufficient condition under which SDIHT exactly recovers the original sparse signal is proved. The advantages of SDIHT are high reconstruction accuracy and low computational complexity. Simulations show that, for the same signal sparsity or number of measurements, SDIHT reconstructs 0-1 sparse signals and two-dimensional images better and more efficiently than the IHT, OMP, and BIHT algorithms.

2.
In image reconstruction from missing projection data, such as when imaging an object with opaque obstructions, the conventional maximum entropy technique reconstructs a degraded image. In this paper, we propose an improved maximum entropy technique based on a good estimate of the missing projection data. Computer simulations of the proposed algorithm show a significant improvement in image quality and convergence behavior over conventional algorithms. The algorithm is also applied successfully to ultrasound attenuation CT (computed tomography) using a sponge phantom.

3.
To address the problem that the computational load, difficulty, and running time of the PESA algorithm grow sharply with the size of the solution set, an entropy metric is introduced into PESA, yielding a comentropy-based PESA (C-PESA). The algorithm uses an information-entropy metric that quantifies the distribution of the Pareto solution set to judge whether the population has reached maturity; in our runs the population matured by about 1300 iterations, allowing the costly optimization process to terminate early and, to an extent, reducing PESA's time complexity. Simulation results show that as the population size grows, the computational cost of C-PESA increases only linearly, its running time is shortened by nearly a factor of four, and evolutionary efficiency improves.
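The entropy maturity test can be illustrated with a toy version: treat the objective space as a uniform grid and measure the Shannon entropy of how archive points occupy its cells. The two-objective setting and grid resolution below are illustrative assumptions, not the paper's exact metric:

```python
import math
from collections import Counter

def grid_entropy(points, cells=10):
    """Shannon entropy of the occupancy distribution of archive points
    over a uniform grid in a two-objective space normalized to [0, 1]."""
    counts = Counter((min(int(p[0] * cells), cells - 1),
                      min(int(p[1] * cells), cells - 1)) for p in points)
    n = len(points)
    return -sum(c / n * math.log(c / n) for c in counts.values())

clustered = [(0.05, 0.05)] * 50                      # all points in one cell
spread = [(i / 50, 1 - i / 50) for i in range(50)]   # points spread along the front
```

A well-distributed Pareto archive gives high entropy; once the entropy stops growing between generations, the population can be declared mature and the run stopped early.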

4.
Reciprocal-entropy threshold selection based on chaotic particle swarm optimization (cited 2 times)
吴一全, 占必超. 《信号处理》, 2010, 26(7): 1044-1049
Entropy-based methods form an important class of threshold selection techniques, but existing maximum-entropy methods suffer from undefined values. To address this, a threshold selection method based on reciprocal entropy is proposed. The definition of reciprocal entropy and a one-dimensional threshold selection method are given first, and formulas for reciprocal-entropy thresholding based on straight and oblique partitions of the two-dimensional histogram are derived. Since two-dimensional reciprocal-entropy segmentation is computationally expensive, a chaotic niche particle swarm algorithm is used to search for the optimal threshold, avoiding premature convergence and improving search accuracy and efficiency. Experimental results show that the oblique-partition two-dimensional reciprocal-entropy method outperforms the straight-partition method in noise immunity and running time; compared with the PSO-based two-dimensional maximum-entropy method, the proposed chaotic-niche-PSO oblique-partition reciprocal-entropy method reduces running time by about 40% and segments better.
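For reference, the classical one-dimensional maximum-entropy (Kapur) criterion that the reciprocal-entropy method is designed to replace can be sketched as follows. The bimodal histogram is synthetic, and note how zero-probability bins must be skipped, which is exactly the undefined-value (log of zero) issue the abstract mentions:

```python
import numpy as np

def kapur_threshold(hist):
    """Classical 1-D maximum-entropy threshold selection:
    pick t maximizing the summed Shannon entropies of the two classes."""
    p = hist / hist.sum()
    best_t, best_h = 0, -np.inf
    for t in range(1, len(p)):
        p0, p1 = p[:t].sum(), p[t:].sum()
        if p0 == 0 or p1 == 0:
            continue                                  # skip undefined log(0) cases
        q0, q1 = p[:t] / p0, p[t:] / p1
        h = -(q0[q0 > 0] * np.log(q0[q0 > 0])).sum() \
            - (q1[q1 > 0] * np.log(q1[q1 > 0])).sum()
        if h > best_h:
            best_t, best_h = t, h
    return best_t

hist = np.zeros(256)        # synthetic bimodal gray-level histogram
hist[40:60] = 100           # dark population
hist[180:200] = 100         # bright population
t = kapur_threshold(hist)
```

On this histogram any threshold between the two modes maximizes the summed entropy, so the selected t falls in the valley that separates them.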

5.
Considering the influence of the projection matrix on the performance of compressed sensing (CS) algorithms, this paper proposes an algorithm for optimizing the projection matrix. A differentiable threshold function is introduced, and the correlation coefficients between the projection matrix and the sparsifying dictionary are compressed by shrinking the off-diagonal entries of the Gram matrix; a gradient descent method based on the Wolfe conditions solves for the optimal projection matrix, improving the stability of the optimization and the accuracy of the reconstructed signal. The l0 optimization problem is solved with basis pursuit (BP) and orthogonal matching pursuit (OMP), and random sparse vectors, wavelet test signals, and images are sensed and reconstructed by CS. Simulations show that the proposed projection matrix optimization algorithm considerably improves reconstruction accuracy.
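The off-diagonal shrinkage idea can be sketched as follows. This stand-in uses simple hard clipping of the Gram matrix and eigen-truncation rather than the paper's differentiable threshold function and Wolfe-condition gradient descent, so it is illustrative only; the shrinkage level and dimensions are assumptions:

```python
import numpy as np

def shrink_gram(Phi, Psi, t=0.2, iters=50):
    """Reduce the mutual coherence of the equivalent dictionary D = Phi @ Psi
    by clipping large off-diagonal Gram entries, then projecting the Gram
    matrix back to rank m (Elad-style alternating scheme)."""
    m = Phi.shape[0]
    D = Phi @ Psi
    for _ in range(iters):
        D = D / np.linalg.norm(D, axis=0)        # unit-norm columns
        G = D.T @ D
        G = np.clip(G, -t, t)                    # shrink large correlations
        np.fill_diagonal(G, 1.0)
        w, V = np.linalg.eigh(G)                 # eigenvalues ascending
        D = (V[:, -m:] * np.sqrt(np.clip(w[-m:], 0, None))).T  # rank-m factor
    return D

def coherence(D):
    Dn = D / np.linalg.norm(D, axis=0)
    G = np.abs(Dn.T @ Dn)
    np.fill_diagonal(G, 0.0)
    return G.max()

rng = np.random.default_rng(1)
m, n = 16, 48
Phi, Psi = rng.standard_normal((m, n)), np.eye(n)
mu0 = coherence(Phi @ Psi)       # coherence of the random starting point
mu1 = coherence(shrink_gram(Phi, Psi))
```

Lower coherence between the projection matrix and the dictionary is what makes the subsequent BP/OMP recovery more reliable.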

6.
A generalized maximum entropy method coupled with the Gerchberg-Saxton algorithm has been developed to extend the resolution of high-resolution TEM images of weak objects. The Gerchberg-Saxton algorithm restores spatial resolution by applying real-space and reciprocal-space projections cyclically. In our methodology, a generalized maximum entropy method (Kullback-Leibler cross entropy) suited to weak objects is used as the real-space (P1) projection. After the P1 projection, not only are the phases within the input spatial frequencies improved, but the phases at the next higher frequencies are also extrapolated. An example of semi-blind deconvolution (P1 projection only) improving the resolution at a SiC twin boundary is shown; the bonding across this twin boundary remains Si-C, but rotated 180 degrees about the boundary normal. The optimum solution from the P1 projection can be further improved by a P2 projection, in which the square roots of the diffraction intensities from a diffraction pattern are substituted to complete one cycle of the Gerchberg-Saxton algorithm. Application examples of the Gerchberg-Saxton algorithm solving the atomic structure of defects (a 2 x 1 interfacial reconstruction and a dislocation) at NiSi2/Si interfaces are also shown.
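The Gerchberg-Saxton cycle in its classical two-amplitude form looks like the sketch below. This is the generic FFT version with plain amplitude constraints, not the TEM-specific P1/P2 projections with entropy regularization described above; the grid size and test field are synthetic:

```python
import numpy as np

def gerchberg_saxton(field, source_amp, target_amp, iters=200):
    """Alternate between real space and Fourier space, enforcing the known
    amplitude in each domain while keeping the current phase estimate."""
    for _ in range(iters):
        F = np.fft.fft2(field)
        F = target_amp * np.exp(1j * np.angle(F))          # Fourier-space constraint
        field = np.fft.ifft2(F)
        field = source_amp * np.exp(1j * np.angle(field))  # real-space constraint
    return field

rng = np.random.default_rng(7)
true_field = np.exp(1j * rng.uniform(0, 2 * np.pi, (16, 16)))  # unknown phase object
source_amp = np.abs(true_field)                                 # known real-space amplitude
target_amp = np.abs(np.fft.fft2(true_field))                    # known diffraction amplitude
field0 = source_amp * np.exp(1j * rng.uniform(0, 2 * np.pi, (16, 16)))
rec = gerchberg_saxton(field0, source_amp, target_amp)
err = lambda f: np.linalg.norm(np.abs(np.fft.fft2(f)) - target_amp)
```

A classical property of this cycle is that the Fourier-amplitude error is nonincreasing from iteration to iteration, which is what makes the alternating projections converge toward a phase consistent with both constraints.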

7.
Unlike MENT (the maximum entropy algorithm), the extended MENT algorithm can incorporate prior information and handle incomplete projections or limited-angle data. The reconstruction problem is formulated as the solution of linear systems involving the Fredholm integral equation. To develop the extended MENT algorithm, maximum entropy is replaced by a more general optimization criterion: minimizing the discriminatory function. A priori knowledge of the shape of the object is easily incorporated into the algorithm through the discriminatory function, and useful mathematical properties that make the discriminatory function attractive are derived. The sensitivity of the minimum discriminatory solution is derived to characterize the noise in the reconstructed images. The extended MENT algorithm is developed for a parallel geometry, and its convergence properties are given. Its image processing performance is better than that of other maximum entropy algorithms, such as the multiplicative algebraic reconstruction technique (MART), and of more standard methods such as ART and convolution backprojection.
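For comparison, the MART update the extended MENT is benchmarked against applies a multiplicative per-ray correction. A minimal sketch on a tiny synthetic consistent system (the 3-by-2 system and row weights are made up for illustration):

```python
import numpy as np

def mart(A, y, iters=50):
    """Multiplicative ART: for each ray i, update x_j *= (y_i / (Ax)_i)^(a_ij),
    assuming a nonnegative system matrix and positive measurements."""
    x = np.ones(A.shape[1])              # strictly positive starting image
    for _ in range(iters):
        for i in range(A.shape[0]):
            ax = A[i] @ x
            if ax > 0:
                x *= (y[i] / ax) ** A[i]  # exponent a_ij gates each pixel's correction
    return x

A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.5, 0.5]])
y = A @ np.array([2.0, 3.0])             # consistent projections of a 2-pixel "image"
x_hat = mart(A, y)
```

The multiplicative form keeps the iterate positive throughout, which is what gives MART its entropy-like character.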

8.
For signals whose wide bandwidth makes direct sampling difficult, compressed sensing theory offers a feasible low-rate sampling approach. A signal with a sparse representation in some transform domain can be captured by a small number of low-rate projections that already contain the essential information for reconstruction; compressed sensing then recovers the sparse vector from the projections and thereby the original signal. A compressed sensing reconstruction algorithm based on nonconvex optimization is also introduced: compared with convex l1-norm optimization and the l2 norm, which imposes no sparsity constraint, the nonconvex lp norm enforces sparsity more strongly. Experimental results show that compressed sensing can markedly lower the required sampling rate, and that the nonconvex optimization algorithm achieves better reconstruction.
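A standard nonconvex lp solver of the kind the abstract describes is iteratively reweighted least squares (IRLS). The sketch below uses p = 0.5 with epsilon-smoothing and annealing; the parameter choices and problem sizes are illustrative assumptions, not the paper's exact algorithm:

```python
import numpy as np

def irls_lp(A, y, p=0.5, iters=50, eps=1.0):
    """IRLS for min ||x||_p subject to Ax = y (nonconvex for 0 < p < 1).
    Each step solves a weighted minimum-norm problem in closed form."""
    x = np.linalg.lstsq(A, y, rcond=None)[0]      # minimum-l2 starting point
    for _ in range(iters):
        w = (x ** 2 + eps) ** (1 - p / 2)         # smoothed l_p inverse weights
        Aw = A * w                                # A @ diag(w)
        x = Aw.T @ np.linalg.solve(A @ Aw.T, y)   # x = W A^T (A W A^T)^{-1} y
        eps = max(eps / 10.0, 1e-12)              # anneal the smoothing term
    return x

rng = np.random.default_rng(2)
n, m = 64, 24
x_true = np.zeros(n)
x_true[[5, 20, 40]] = [1.5, -2.0, 1.0]            # 3-sparse ground truth
A = rng.standard_normal((m, n))
y = A @ x_true                                     # low-rate random projections
x_hat = irls_lp(A, y)
```

The annealed epsilon is what lets the nonconvex lp penalty concentrate weight on the true support without dividing by zero early on.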

9.
Sparse signals can be reconstructed from far fewer samples than required by the Shannon sampling theorem if compressed sensing (CS) is employed. Traditionally, a random Gaussian (rGauss) matrix is used as the projection matrix in CS. This paper instead considers optimizing the projection matrix to enhance reconstruction quality. Bringing the product of the projection matrix and the sparsifying basis close to an equiangular tight frame (ETF) is a good idea proposed in previous work. Here, a low-rank Gram matrix model is introduced to realize this idea, and an algorithm is presented via a computational method for the low-rank matrix nearness problem. Simulations show that the proposed method outperforms several other projection matrix optimization methods in image denoising via sparse representation.
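The ETF target can be made concrete through the Welch bound, the coherence floor that an equiangular tight frame attains exactly. A minimal sketch (the dimensions are arbitrary):

```python
import numpy as np

def mutual_coherence(D):
    """Largest absolute inner product between distinct unit-norm columns."""
    Dn = D / np.linalg.norm(D, axis=0)
    G = np.abs(Dn.T @ Dn)
    np.fill_diagonal(G, 0.0)
    return G.max()

def welch_bound(m, n):
    """Lower bound on the coherence of n unit vectors in R^m;
    equality holds exactly when the vectors form an ETF."""
    return np.sqrt((n - m) / (m * (n - 1)))

rng = np.random.default_rng(4)
m, n = 20, 50
mu = mutual_coherence(rng.standard_normal((m, n)))   # random Gaussian baseline
```

A random Gaussian matrix sits strictly above the Welch bound; projection matrix optimization pushes the Gram matrix of the equivalent dictionary down toward it.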

10.
练秋生, 周婷. 《电子学报》, 2012, 40(7): 1416-1422
How to reconstruct a high-quality image from few measurements is a key problem in compressive imaging systems. Based on the distribution of the energies of random projections of image blocks, this paper proposes a new adaptive sampling scheme and an effective reconstruction algorithm tailored to it. Reconstruction exploits the sparse representation of the image over a dictionary together with nonlocal self-similarity as prior knowledge. For sparse representation, a redundant dictionary is constructed from several directional dictionaries and one orthogonal DCT dictionary, and the sparse optimization problem is solved under an l1-norm constraint. By fully exploiting the local properties of image blocks and the nonlocal properties of the image, the proposed compressive imaging algorithm reconstructs images of relatively high quality at low sampling rates.

11.
Matrix completion (MC), a generalization of compressed sensing (CS), has been widely applied in many fields. In recent years, MC algorithms based on Riemannian optimization have attracted attention for their high reconstruction accuracy and fast computation. Since such algorithms assume the rank of the original matrix is fixed and known, and choose their starting point at random, this paper proposes a Riemannian MC algorithm with automatic rank estimation. By optimizing an objective containing a rank regularization term, the algorithm iteratively obtains a rank estimate and a pre-reconstructed matrix; matrix completion is then carried out by a conjugate gradient method on the Riemannian manifold of matrices of the estimated rank, starting from the pre-reconstructed matrix, which improves reconstruction accuracy. Experimental results show that, compared with several classical image completion methods, the proposed algorithm significantly improves image reconstruction accuracy.
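The rank-estimation step can be illustrated in its simplest form: thresholding the singular value spectrum. The paper's actual estimator optimizes a rank-regularized objective; this is only a toy proxy with an assumed tolerance:

```python
import numpy as np

def estimate_rank(M, rel_tol=1e-6):
    """Numerical rank: count singular values above rel_tol times the largest."""
    s = np.linalg.svd(M, compute_uv=False)
    return int(np.sum(s > rel_tol * s[0]))

rng = np.random.default_rng(5)
M = rng.standard_normal((30, 3)) @ rng.standard_normal((3, 20))  # exact rank-3 matrix
r = estimate_rank(M)
```

Fixing the manifold to the estimated rank is what lets the subsequent Riemannian conjugate gradient search stay on a low-dimensional space.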

12.
Nonlinear image restoration is a complicated problem that is receiving increasing attention. Since every image formation system involves a built-in nonlinearity, nonlinear image restoration finds applications in a wide variety of research areas. Iterative algorithms are well established for the corresponding linear restoration problem. In this paper, a generalized analysis of the convergence properties of nonlinear iterative algorithms is introduced, and applications of the iterative Gauss-Newton (GN) algorithm to nonlinear image restoration are considered. The convergence properties of a general class of nonlinear iterative algorithms are rigorously studied through the Global Convergence Theorem (GCT). The derivation is based on eigen-analysis rather than norm analysis, which offers a global picture of the evolution and convergence of an iterative algorithm; the generalized convergence analysis may also be interpreted as a link toward the integration of minimization and projection algorithms. The iterative GN algorithm for the least-squares optimization problem is introduced. Its computational complexity is enormous, making implementation difficult in practical applications, so structural modifications are introduced that drastically reduce the computational complexity while preserving the convergence rate of the GN algorithm. With these modifications, the GN algorithm becomes particularly useful in nonlinear optimization problems. The convergence properties of the introduced algorithms are readily derived on the basis of the generalized GCT analysis, and their application to practical problems is demonstrated through several examples.
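A minimal Gauss-Newton iteration of the kind analyzed above, on a toy nonlinear least-squares fit. The exponential model and data are synthetic, and none of the paper's structural modifications or step-size control are included:

```python
import numpy as np

def gauss_newton(r, jac, x0, iters=20):
    """Gauss-Newton for min ||r(x)||^2: at each step solve the
    linearized normal equations J^T J dx = J^T r and update x."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        J = jac(x)
        x = x - np.linalg.solve(J.T @ J, J.T @ r(x))
    return x

# Toy model: fit y = a * exp(b * t) to noiseless data.
t = np.linspace(0.0, 1.0, 20)
a_true, b_true = 2.0, -1.5
y = a_true * np.exp(b_true * t)
r = lambda x: x[0] * np.exp(x[1] * t) - y                 # residual vector
jac = lambda x: np.column_stack([np.exp(x[1] * t),        # d r / d a
                                 x[0] * t * np.exp(x[1] * t)])  # d r / d b
x_hat = gauss_newton(r, jac, [1.0, 0.0])
```

On a zero-residual problem like this, GN converges rapidly once inside the basin of attraction, which is the behavior the eigen-analysis above characterizes globally.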

13.
The acquisition of CT data and the reconstruction of CT images closely resemble the rendering process in computer graphics, so using graphics processing units (GPUs) to accelerate CT reconstruction has become a hot topic in recent CT research. Based on the characteristics of single-slice spiral CT data, this paper constructs a "parallel-fan-beam" projection model and implements a GPU-based three-dimensional reconstruction algorithm for single-slice spiral CT. Numerical experiments show the reconstruction is more than 10 times faster than slice-by-slice reconstruction on the CPU.

14.
To improve optical computed tomography (OpCT) reconstruction with limited projection views, maximum entropy (ME) algorithms have been proposed and can achieve better results than traditional ones. However, in the discrete iterative process of ME, the variables of the iterative function are continuous, so interpolation should be used to improve the precision of the iterative function values. Here, sinc-function interpolation is adopted in the ME algorithm (SINCME), and its reconstruction results for limited-view OpCT are studied on four typical phantoms. Compared with ME without interpolation, the traditional medical CT back-projection algorithm (BP), and the iterative algebraic reconstruction technique (ART), the SINCME algorithm achieves the best reconstruction results. In an emission spectral tomography experiment, SINCME was also used to calculate the three-dimensional distribution of physical parameters of a candle flame. Both the algorithmic study and the experiment demonstrate that SINCME meets the demands of limited-view OpCT reconstruction.
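The sinc interpolation step itself is the Whittaker-Shannon formula. A minimal sketch on a band-limited test signal; the tone frequency and sampling rate are arbitrary, and because the infinite sum is truncated to the available samples, accuracy degrades near the record edges, hence the interior evaluation window:

```python
import numpy as np

def sinc_interp(samples, ts, t):
    """Whittaker-Shannon interpolation: x(t) = sum_k x[k] * sinc((t - kT) / T),
    where np.sinc is the normalized sinc sin(pi u) / (pi u)."""
    T = ts[1] - ts[0]
    return np.array([np.sum(samples * np.sinc((ti - ts) / T)) for ti in t])

ts = np.arange(0.0, 4.0, 0.1)              # 10 Hz uniform sampling
x = np.sin(2 * np.pi * ts)                 # 1 Hz tone, well below Nyquist
t_fine = np.arange(1.5, 2.5, 0.01)         # evaluate away from the edges
x_hat = sinc_interp(x, ts, t_fine)
```

Between samples the interpolant reproduces the band-limited signal closely, which is the precision gain SINCME exploits inside the ME iteration.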

15.
周军妮. 《电视技术》, 2012, 36(23): 160-164
Current vehicle-matching algorithms for complex traffic scenes are mainly template matching based on gray-level correlation, which is sensitive to illumination changes and noise. Local cross-entropy matching improves accuracy markedly, but its computational load makes engineering implementation difficult. This paper therefore combines projection transforms with local cross-entropy to form a feature vector for target matching; the method is robust to illumination changes and to a degree of partial occlusion, and is also clearly faster than the plain local cross-entropy algorithm. Simulation experiments verify the effectiveness of the algorithm.

16.
A wavelet-based method for multiscale tomographic reconstruction (cited 4 times)
The authors represent the standard ramp-filter operator of filtered back-projection (FBP) reconstruction in bases composed of Haar and Daubechies compactly supported wavelets. The resulting multiscale representation of the ramp-filter matrix operator is approximately diagonal, and the accuracy of this diagonal approximation improves as wavelets with more vanishing moments are used. This wavelet-based representation enables a multiscale tomographic reconstruction technique in which the object is reconstructed at multiple scales or resolutions, and a complete reconstruction is obtained by combining the reconstructions at different scales. The technique has the same computational complexity as FBP reconstruction. It differs from other multiscale reconstruction techniques in that (1) the object is defined through a one-dimensional multiscale transformation of the projection domain, and (2) noise in the projection data is explicitly accounted for by calculating maximum a posteriori probability (MAP) multiscale reconstruction estimates based on a chosen fractal prior on the multiscale object coefficients. The computational complexity of this MAP solution is also the same as that of FBP reconstruction, in contrast to commonly used methods of statistical regularization, which result in computationally intensive optimization algorithms.
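The ramp-filter operator being diagonalized above is, in the Fourier domain, simply multiplication by |omega|. A minimal per-projection FFT implementation (1-D, no apodization window; the test signals are synthetic):

```python
import numpy as np

def ramp_filter(projection):
    """Apply the FBP ramp filter |omega| to one projection via the FFT."""
    n = projection.shape[-1]
    H = np.abs(np.fft.fftfreq(n))                  # |omega|, zero at DC
    return np.real(np.fft.ifft(np.fft.fft(projection) * H))

flat = ramp_filter(np.ones(64))    # constant projection has only DC, which |omega| removes
imp = np.zeros(64)
imp[32] = 1.0
bump = ramp_filter(imp)            # filtered impulse: sharp center, negative sidelobes
```

Because the filter zeroes the DC component, every filtered projection sums to zero; it is this high-pass operator whose wavelet-domain matrix is approximately diagonal.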

17.
Three-dimensional reconstruction of vessels from digital X-ray angiographic images is a powerful technique that compensates for the limitations of angiography: it lets physicians accurately inspect the complex arterial network and quantitatively assess disease-induced vascular alterations in three dimensions. In this paper, both the projection principle of single-view angiography and the mathematical modeling of two-view angiography are studied in detail. Movement of the table, which commonly occurs in clinical practice, complicates the reconstruction process. On the basis of the pinhole camera model and existing optimization methods, an algorithm is developed for 3-D reconstruction of coronary arteries from two uncalibrated monoplane angiographic images. A simple and effective perspective projection model is proposed; a nonlinear optimization method refines the 3-D structure of the vessel skeletons while taking the influence of table movement into consideration; and an accurate model is suggested for calculating the contour points of the vascular surface that fully utilizes the information in the two projections. In experiments with phantom and patient angiograms, the vessel centerlines are reconstructed in 3-D space with a mean positional accuracy of 0.665 mm and a mean back-projection error of 0.259 mm, showing that the algorithm is effective and robust.
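The single-view projection principle referred to above is the pinhole camera model. A minimal sketch; the intrinsic matrix, pose, and test points are made up for illustration:

```python
import numpy as np

def project(K, R, t, X):
    """Pinhole projection: map 3-D points X (n x 3) to pixel coordinates."""
    Xc = X @ R.T + t                 # world -> camera coordinates
    uvw = Xc @ K.T                   # apply the intrinsic matrix
    return uvw[:, :2] / uvw[:, 2:3]  # perspective divide

K = np.array([[1000.0,    0.0, 320.0],
              [   0.0, 1000.0, 240.0],
              [   0.0,    0.0,   1.0]])   # focal length 1000 px, principal point (320, 240)
R, t = np.eye(3), np.zeros(3)             # identity pose
uv = project(K, R, t, np.array([[0.0, 0.0, 2.0],
                                [0.1, 0.0, 2.0]]))
```

A point on the optical axis lands at the principal point, and lateral offsets scale with focal length over depth; two such views with unknown relative pose are what the reconstruction algorithm calibrates and triangulates.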

18.
This paper proposes an image reconstruction algorithm based on Hopfield neural network (HNN) optimization. The reconstruction problem is cast as an HNN optimization: minimizing a peak function of the reconstructed image together with the squared error between the original projections and the reprojections serves as the reconstruction objective, which is used as the energy function of a continuous HNN, so that minimizing the energy yields the optimized reconstruction. The method is simple, computationally light, fast to converge, and amenable to parallel computation. Tested against the ART algorithm on noise-free projection data generated by computer simulation, the new algorithm proves superior.

19.
Most present-day computerized tomography (CT) systems are based on reconstruction algorithms that produce only approximate deterministic solutions of the image reconstruction problem. These algorithms yield reasonable results when measurement noise is low and the measurement geometry is regular, and are considered acceptable because they require far less computation and storage than more powerful algorithms that can yield near-optimal results. However, the special geometry of the CT image reconstruction problem can be exploited to reduce by orders of magnitude the computation required for optimal reconstruction methods, such as the minimum variance estimator; these simplifications can make the minimum variance technique very competitive with well-known approximate techniques such as the algebraic reconstruction technique (ART) and convolution backprojection. The general minimum variance estimator for CT is first presented, and then a fast algorithm is described that uses Fourier transform techniques to implement the estimator for either fan-beam or parallel-beam geometries. The computational requirements of these estimators are examined and compared with other techniques. To allow further comparison with the commonly used convolution-backprojection method, a representation of the fast algorithm is derived that allows its equivalent convolving function to be examined. Several examples are presented.

20.
Maximum-entropy threshold image segmentation based on a genetic algorithm (cited 11 times)
Threshold segmentation is of great significance in image analysis and image recognition. The maximum-entropy method has many advantages but also a weakness: it requires substantial computation, especially when computing multiple thresholds, so an optimization algorithm needs to be introduced. This paper applies a genetic algorithm to maximum-entropy threshold segmentation, discusses both the one-dimensional and two-dimensional thresholding cases, and proposes a maximum-entropy threshold segmentation method based on an improved genetic algorithm. Comparative segmentation results on several classic images show that the genetic-algorithm-based method effectively speeds up maximum-entropy segmentation and improves the real-time performance of image processing.
