Similar Documents
20 similar documents found (search time: 31 ms)
1.
Piecewise continuous reconstruction of real-valued data can be formulated in terms of nonconvex optimization problems. Both stochastic and deterministic algorithms have been devised to solve them. The simplest such reconstruction process is the weak string. Exact solutions can be obtained for it and are used to determine the success or failure of the algorithms under precisely controlled conditions. It is concluded that the deterministic algorithm (graduated nonconvexity) outstrips stochastic (simulated annealing) algorithms both in computational efficiency and in problem-solving power.
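As a concrete (hypothetical) illustration of the weak-string model this abstract refers to, the sketch below evaluates the weak-string energy, a quadratic data term plus a truncated-quadratic smoothness term, on step-edge data. It shows that a reconstruction that "breaks" at the discontinuity beats a smoothed one; the values of `lam` and `alpha` are arbitrary choices, not taken from the paper.

```python
def weak_string_energy(u, d, lam=1.0, alpha=0.5):
    """Energy of the weak string: data fidelity plus a truncated
    quadratic smoothness term (the truncation models breaks)."""
    data = sum((ui - di) ** 2 for ui, di in zip(u, d))
    smooth = sum(min(lam * (u[i + 1] - u[i]) ** 2, alpha)
                 for i in range(len(u) - 1))
    return data + smooth

# Step-edge data: the optimal reconstruction "breaks" the string once,
# paying the fixed penalty alpha instead of a large quadratic cost.
d = [0.0] * 5 + [2.0] * 5
exact_fit = weak_string_energy(d, d)   # zero data cost, one break: alpha
ramp = [0.0, 0.0, 0.0, 0.5, 1.0, 1.5, 2.0, 2.0, 2.0, 2.0]
smooth_fit = weak_string_energy(ramp, d)
print(exact_fit, smooth_fit)
```

With these parameters the broken fit costs 0.5 (one break) while the smoothed ramp pays both data and smoothness penalties, which is why minimizers of this energy preserve discontinuities.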

2.
Among sparse reconstruction algorithms, convex relaxation methods fall short in recovery efficiency and greedy pursuit methods fall short in recovery accuracy. Building on the iterative-optimization idea of genetic algorithms and combining the strengths of simulated annealing and multi-population algorithms, two heuristic sparse reconstruction algorithms are proposed: one based on a simulated-annealing genetic algorithm and one based on a multi-population genetic algorithm. Both start from the traditional genetic algorithm's tendency to become trapped in local optima and search for the global optimum of the sparse signal, respectively by maintaining diversity among individuals and by increasing population diversity; theoretical analysis establishes the validity of the parameter choices and search strategies. The algorithms are further validated on the direction-of-arrival (DOA) estimation problem for spatial sources in array signal processing. Simulation results show that, compared with the orthogonal matching pursuit (OMP) algorithm and the l1-norm singular value decomposition (l1-SVD) algorithm, the proposed algorithms improve DOA estimation accuracy and reduce computational complexity, converging quickly to the global optimum.
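The hybrid algorithms above rely on simulated annealing's ability to accept uphill moves and so escape the local optima that trap a plain genetic algorithm. A minimal, self-contained annealing sketch on a hypothetical 1-D multimodal function (not the DOA objective from the paper; all parameters are illustrative) shows that mechanism:

```python
import math
import random

def anneal(f, x0, lo, hi, t0=2.0, cooling=0.995, steps=4000, seed=0):
    """Minimal simulated annealing: accept worsening moves with
    probability exp(-delta/T) so the search can leave local optima."""
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best, best_f = x, fx
    t = t0
    for _ in range(steps):
        cand = min(hi, max(lo, x + rng.gauss(0.0, 0.3)))
        fc = f(cand)
        if fc < fx or rng.random() < math.exp(-(fc - fx) / t):
            x, fx = cand, fc
            if fx < best_f:
                best, best_f = x, fx
        t *= cooling   # geometric cooling schedule
    return best, best_f

# Toy multimodal objective: a local minimum near x = 1.3 and a deeper
# global minimum near x = -1.4; the search starts in the local basin.
f = lambda x: x ** 4 - 4 * x ** 2 + x
best, best_f = anneal(f, x0=1.4, lo=-3.0, hi=3.0)
```

Since the best-so-far point is tracked elitistically, the returned energy can never exceed that of the starting point, regardless of the acceptance randomness.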

3.
A fast image super-resolution reconstruction algorithm combining the multigrid (MG) and generalized minimal residual (GMRES) methods is proposed. A regularized model for super-resolution reconstruction is first formulated; then, after a systematic review of the MG and GMRES methods, an MG-GMRES algorithm is proposed for solving the nonsymmetric sparse linear systems arising in super-resolution reconstruction. The smoothing, restriction, and interpolation operations of MG-GMRES and its computational complexity are discussed in detail. Experiments show that the reconstruction results are quite effective and that the algorithm converges faster than MG, GMRES, and Richardson iteration.
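For comparison, the Richardson iteration mentioned as the slowest baseline can be sketched in a few lines; the 2×2 symmetric system below is a toy stand-in for the large nonsymmetric systems in the paper, and the relaxation parameter is an arbitrary choice.

```python
def richardson(A, b, omega=0.1, iters=500):
    """Richardson iteration x <- x + omega * (b - A x) on a small
    dense system, starting from the zero vector."""
    n = len(b)
    x = [0.0] * n
    for _ in range(iters):
        # residual r = b - A x
        r = [b[i] - sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
        x = [x[i] + omega * r[i] for i in range(n)]
    return x

A = [[4.0, 1.0], [1.0, 3.0]]   # symmetric positive definite toy system
b = [1.0, 2.0]
x = richardson(A, b)
# exact solution by Cramer's rule: x = (1/11, 7/11)
```

Convergence requires omega small enough that the spectral radius of I - omega*A stays below 1, which is exactly the kind of slow, mode-by-mode error decay that MG and GMRES accelerate.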

4.
The filtered back-projection (FBP) algorithm commonly used for CT image reconstruction is redesigned as a parallel program running on the big-data framework Spark, improving computational efficiency, enabling batch reconstruction, and shortening reconstruction time. Based on the distributed computing framework Spark and its image-processing tool Thunder, FBP is parallelized across slices. Experimental results show that as the Spark cluster scales up, reconstructing a given number of CT images takes significantly less time than in single-machine mode while image quality is preserved; the parallel FBP algorithm achieves near-ideal speedup, with parallel efficiency approaching 1. The Spark-based FBP implementation significantly accelerates CT image reconstruction, supports parallel reconstruction of large image sets, can be extended to other CT reconstruction algorithms, and offers a useful reference for building remote medical image reconstruction platforms.
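The property exploited above is that CT slices are independent, so reconstruction is an embarrassingly parallel map over slices. The sketch below imitates that structure locally with a thread pool and a stub kernel standing in for real FBP (Spark and Thunder are not used here; the stub just sums a fake sinogram):

```python
from concurrent.futures import ThreadPoolExecutor

def reconstruct_slice(sinogram_slice):
    """Stand-in for filtered back projection of one slice; a real
    per-slice FBP kernel would go here (this stub only sums rows)."""
    return sum(sum(row) for row in sinogram_slice)

# Inter-slice parallelism: slices are independent, so they can be
# mapped across workers (Spark's RDD map plays this role in the paper).
sinograms = [[[i + j for j in range(4)] for i in range(4)] for _ in range(8)]
with ThreadPoolExecutor(max_workers=4) as pool:
    parallel = list(pool.map(reconstruct_slice, sinograms))
serial = [reconstruct_slice(s) for s in sinograms]
assert parallel == serial  # same results, computed concurrently
```

Because the per-slice work shares no state, the parallel map returns bit-identical results to the serial loop, which is what makes batch CT reconstruction scale almost linearly with cluster size.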

5.
A robust stereo point matching algorithm based on adaptive iterative relaxation
Image matching is a key component of stereo vision and the hardest problem a binocular stereo measurement system must solve. For robust image matching, a stereo point matching method based on adaptive iterative relaxation is proposed. The method first constructs a matching support function using the disparity gradient constraint, then optimizes this function by relaxation to match stereo point pairs. Because the relaxation parameters are updated dynamically during matching, both the false-match rate and the false-rejection rate are effectively reduced. On this basis, a further verification strategy is proposed that re-applies the disparity gradient constraint to the matches obtained after relaxation; at the cost of a modest increase in false rejections, it sharply reduces false matches, satisfying applications that strictly limit the false-match rate. Experimental results confirm that the new algorithm is effective, and it has been used in a binocular stereo measurement prototype system.
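The disparity gradient constraint used to build the support function can be sketched as follows; the cyclopean-point formulation and the limit of 1.0 are common conventions in the stereo literature, assumed here rather than taken from the paper:

```python
def disparity_gradient(p1, d1, p2, d2):
    """Disparity gradient between two candidate matches: difference of
    disparities over the distance between their cyclopean points."""
    # cyclopean point: the left-image position shifted by half the disparity
    c1 = (p1[0] - d1 / 2.0, p1[1])
    c2 = (p2[0] - d2 / 2.0, p2[1])
    sep = ((c1[0] - c2[0]) ** 2 + (c1[1] - c2[1]) ** 2) ** 0.5
    return abs(d1 - d2) / sep if sep else float("inf")

def support(p1, d1, p2, d2, limit=1.0):
    """Binary support: neighbouring matches support each other only if
    their disparity gradient stays under the limit."""
    return 1.0 if disparity_gradient(p1, d1, p2, d2) < limit else 0.0

# nearby points with similar disparity support each other...
assert support((10.0, 5.0), 2.0, (14.0, 5.0), 2.2) == 1.0
# ...while a large disparity jump over a short distance is rejected
assert support((10.0, 5.0), 2.0, (11.0, 5.0), 6.0) == 0.0
```

A relaxation scheme would sum such pairwise support over each candidate's neighbourhood and iteratively suppress candidates with low total support.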

6.
Image flow is the velocity field in the image plane caused by the motion of the observer, objects in the scene, or apparent motion, and can contain discontinuities due to object occlusion in the scene. An algorithm that can estimate the image flow velocity field when there are discontinuities due to occlusions is described. The constraint line clustering algorithm uses a statistical test to estimate the image flow velocity field in the presence of step discontinuities in the image irradiance or velocity field. Particular emphasis is placed on motion estimation and segmentation in situations such as random dot patterns where motion is the only cue to segmentation. Experimental results on a demanding synthetic test case and a real image are presented. A smoothing algorithm for improving the velocity field estimate is also described. The smoothing algorithm constructs a smooth estimate of the velocity field by approximating a surface between step discontinuities. It is noted that the velocity field estimate can be improved using surface reconstruction between velocity field boundaries.
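Each pixel's brightness-change equation Ix*u + Iy*v + It = 0 defines a constraint line in velocity space; clustering many such lines is the core of the algorithm. The noiseless two-line case reduces to a 2×2 solve, sketched below with made-up gradient values (not data from the paper):

```python
def flow_from_two_constraints(c1, c2):
    """Each tuple (Ix, Iy, It) defines the motion constraint line
    Ix*u + Iy*v + It = 0; intersecting two lines recovers (u, v)."""
    (a1, b1, t1), (a2, b2, t2) = c1, c2
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-12:
        raise ValueError("constraint lines are parallel (aperture problem)")
    # Cramer's rule on  a1*u + b1*v = -t1,  a2*u + b2*v = -t2
    u = (-t1 * b2 + t2 * b1) / det
    v = (-a1 * t2 + a2 * t1) / det
    return u, v

# gradients made consistent with a true flow of (u, v) = (1.0, -0.5),
# using It = -(Ix*u + Iy*v)
c1 = (2.0, 1.0, -(2.0 * 1.0 + 1.0 * -0.5))
c2 = (0.5, 3.0, -(0.5 * 1.0 + 3.0 * -0.5))
u, v = flow_from_two_constraints(c1, c2)
```

With noisy gradients the constraint lines no longer meet at a point, which is why the paper's algorithm clusters intersections statistically rather than solving a single pair.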

7.
In this paper, we propose a general form of TV-Stokes models and provide an efficient and fast numerical algorithm based on the augmented Lagrangian method. The proposed model and numerical algorithm can be used for a number of applications such as image inpainting, image decomposition, surface reconstruction from sparse gradients, direction denoising, and image denoising. Comparing the properties of different norms in the regularity and fidelity terms, various results are investigated across these applications. We show numerically that the proposed model recovers jump discontinuities of the data and discontinuities of the data gradient while reducing the staircase effect.

8.
In recent years, nonnegative matrix factorization (NMF) has attracted a significant amount of attention in image processing, text mining, speech processing and related fields. Although NMF has been applied successfully in several applications, its naive application to image processing has a few caveats. For example, NMF costs considerable computational resources when performed on large databases. In this paper, we propose two enhanced NMF algorithms for image processing that reduce the computational cost. One is the modified rank-one residue iteration (MRRI) algorithm; the other is the element-wise residue iteration (ERI) algorithm. We combine CAPG (an NMF algorithm proposed by Lin), MRRI and ERI with two-dimensional nonnegative matrix factorization (2DNMF) for image processing. The main difference between NMF and 2DNMF is that the former first unfolds images into one-dimensional (1D) vectors and then represents them with a set of 1D bases, while the latter treats images as 2D matrices and represents them with a set of 2D bases. The three combined algorithms are named CAPG-2DNMF, MRRI-2DNMF and ERI-2DNMF. Computational complexity and convergence analyses of the proposed algorithms are also presented. Three public databases are used to test the three NMF algorithms and the three combinations; the results show the performance advantage of the proposed MRRI and ERI algorithms over the CAPG algorithm. MRRI and ERI have similar performance. The three combined algorithms give better image reconstruction quality and shorter running time than their corresponding 1DNMF algorithms at the same compression ratio. Experiments on a real-captured image database lead to similar conclusions.
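As background for the residue-iteration variants above, the standard Lee-Seung multiplicative updates (not the MRRI or ERI updates themselves) can be written in a few lines of pure Python; the tiny rank-1 matrix is a toy example chosen so the factorization is exact:

```python
def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def transpose(A):
    return [list(r) for r in zip(*A)]

def nmf(V, r, iters=200):
    """Lee-Seung multiplicative updates for V ~ W H with nonnegative
    factors; a simpler baseline than the paper's MRRI/ERI updates."""
    m, n = len(V), len(V[0])
    W = [[0.5 + 0.1 * ((i + j) % 3) for j in range(r)] for i in range(m)]
    H = [[0.5 + 0.1 * ((i * j) % 3) for j in range(n)] for i in range(r)]
    eps = 1e-9   # guards against division by zero
    for _ in range(iters):
        WH, Wt = matmul(W, H), transpose(W)
        num, den = matmul(Wt, V), matmul(Wt, WH)
        H = [[H[i][j] * num[i][j] / (den[i][j] + eps) for j in range(n)]
             for i in range(r)]
        WH, Ht = matmul(W, H), transpose(H)
        num, den = matmul(V, Ht), matmul(WH, Ht)
        W = [[W[i][j] * num[i][j] / (den[i][j] + eps) for j in range(r)]
             for i in range(m)]
    return W, H

V = [[1.0, 2.0, 3.0], [2.0, 4.0, 6.0]]   # exactly rank-1, nonnegative
W, H = nmf(V, r=1)
WH = matmul(W, H)
err = sum((V[i][j] - WH[i][j]) ** 2 for i in range(2) for j in range(3))
```

The multiplicative form keeps every entry of W and H nonnegative by construction, since updates only ever scale existing positive entries.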

9.
Stereo correspondence by surface reconstruction
An algorithm that solves the computational stereo correspondence and the surface reconstruction is presented. The algorithm integrates the reconstruction process in the correspondence analysis by means of multipass attribute matching and disparity refinement. In the matching process, the requirement of attribute similarity is relaxed with the pass number while the requirement for agreement between the predicted and the measured disparity is tightened. Disparity discontinuities and occluded areas are detected by analyzing the partial derivatives of the reconstructed disparity surface. Results on synthetic and on real stereo image pairs are reported.

10.
Super-resolution reconstruction of image sequences
In an earlier work (1999), we introduced the problem of reconstructing a super-resolution image sequence from a given low resolution sequence. We proposed two iterative algorithms, the R-SD and the R-LMS, to generate the desired image sequence. These algorithms assume knowledge of the blur, the down-sampling, the sequence's motion, and the measurement noise characteristics, and apply a sequential reconstruction process. It has been shown that the computational complexity of these two algorithms makes both of them practically applicable. In this paper, we rederive these algorithms as approximations of the Kalman filter and then carry out a thorough analysis of their performance. For each algorithm, we calculate a bound on its deviation from the Kalman filter performance. We also show that the propagated information matrix within the R-SD algorithm remains sparse in time, thus ensuring the applicability of this algorithm. To support these analytical results we present some computer simulations on synthetic sequences, which also show the computational feasibility of these algorithms.

11.
Electron microscope tomography has emerged as the leading technique for structure determination of cellular components with a resolution of a few nanometers, opening up exciting perspectives for visualizing the molecular architecture of the cytoplasm. This work describes and analyzes the parallelization of tomographic reconstruction algorithms for their application in electron microscope tomography of cellular structures. Efficient iterative algorithms that are characterized by a fast convergence rate have been used to tackle the image reconstruction problem. The use of smooth basis functions provides the reconstruction algorithms with an implicit regularization mechanism, very appropriate for highly noisy conditions such as those present in high-resolution electron tomographic studies. Parallel computing techniques have been applied to meet the computational requirements of reconstructing large volumes. An efficient domain decomposition scheme has been devised that leads to a parallel approach capable of hiding interprocessor communication latency. The combination of efficient iterative algorithms and parallel computing techniques has proved to be well suited for the reconstruction of large biological specimens in electron tomography, yielding solutions in reasonable computational times. This work concludes that parallel computing will be key to affording high-resolution structure determination of cells, so that the location of molecular signatures in their native cellular context can be made a reality.

12.
Image analysis using multigrid relaxation methods
Image analysis problems, posed mathematically as variational principles or as partial differential equations, are amenable to numerical solution by relaxation algorithms that are local, iterative, and often parallel. Although they are well suited structurally for implementation on massively parallel, locally interconnected computational architectures, such distributed algorithms are seriously handicapped by an inherent inefficiency at propagating constraints between widely separated processing elements. Hence, they converge extremely slowly when confronted by the large representations of early vision. Application of multigrid methods can overcome this drawback, as we showed in previous work on 3-D surface reconstruction. In this paper, we develop multiresolution iterative algorithms for computing lightness, shape-from-shading, and optical flow, and we examine the efficiency of these algorithms using synthetic image inputs. The multigrid methodology that we describe is broadly applicable in early vision. Notably, it is an appealing strategy to use in conjunction with regularization analysis for the efficient solution of a wide range of ill-posed image analysis problems.
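The "inherent inefficiency at propagating constraints" shows up in how local relaxation damps error modes: one smoothing sweep kills oscillatory error quickly but barely touches smooth error, which is exactly what coarse grids fix. A sketch on the 1-D Poisson problem (a toy setup, not taken from the paper) makes this measurable:

```python
import math

def weighted_jacobi_sweep(e, omega=2.0 / 3.0):
    """One weighted-Jacobi relaxation sweep for the 1-D Poisson system
    A u = 0 (A = tridiag(-1, 2, -1), zero boundaries), applied to the error."""
    n = len(e)
    out = [0.0] * n
    for i in range(n):
        left = e[i - 1] if i > 0 else 0.0
        right = e[i + 1] if i < n - 1 else 0.0
        out[i] = (1 - omega) * e[i] + omega * (left + right) / 2.0
    return out

n = 63
# Fourier error modes: k = 1 is smooth (long wavelength), k = 16 is rough.
smooth = [math.sin(math.pi * (i + 1) / (n + 1)) for i in range(n)]
rough = [math.sin(16 * math.pi * (i + 1) / (n + 1)) for i in range(n)]
for _ in range(10):
    smooth = weighted_jacobi_sweep(smooth)
    rough = weighted_jacobi_sweep(rough)
print(max(map(abs, smooth)), max(map(abs, rough)))
```

After ten sweeps the rough mode has shrunk by roughly an order of magnitude while the smooth mode is almost untouched; multigrid restricts that surviving smooth error to a coarser grid, where it becomes oscillatory and cheap to remove.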

13.
Algorithms used to reconstruct single photon emission computed tomography (SPECT) data are based on one of two principles: filtered back projection or iterative methods. In this paper, an evolution strategy (ES) was applied to reconstruct transaxial slices of SPECT data. Evolutionary algorithms are stochastic global search methods that have been used successfully for many kinds of optimization problems. The newly developed reconstruction algorithm consisting of μ parents and λ children uses a random principle to readjust the voxel values, whereas other iterative reconstruction methods use the difference between measured and simulated projection data. The (μ + λ)-ES was validated against a test image, a heart, and a Jaszczak phantom. The resulting transaxial slices show an improvement in image quality, in comparison to both the filtered back projection method and a standard iterative reconstruction algorithm.
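A (μ+λ)-ES in its generic form is short. The sketch below minimizes a toy sphere function rather than the projection-mismatch fitness a SPECT reconstruction would use over voxel values, and all parameter settings are arbitrary illustrations:

```python
import random

def mu_plus_lambda_es(f, dim, mu=5, lam=20, sigma=0.3, gens=60, seed=1):
    """Minimal (mu + lambda) evolution strategy: children are Gaussian
    mutations of parents, and the best mu of parents + children survive."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-2, 2) for _ in range(dim)] for _ in range(mu)]
    for _ in range(gens):
        children = []
        for _ in range(lam):
            parent = rng.choice(pop)
            children.append([x + rng.gauss(0.0, sigma) for x in parent])
        # elitist selection: parents compete with their children
        pop = sorted(pop + children, key=f)[:mu]
    return pop[0]

sphere = lambda x: sum(v * v for v in x)   # toy fitness, minimum at origin
best = mu_plus_lambda_es(sphere, dim=3)
```

Because selection is elitist (the "+" in (μ+λ)), the best fitness in the population can never worsen between generations, mirroring the monotone improvement the paper relies on when readjusting voxel values randomly.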

14.
Multiple-point statistics (MPS) is widely used in uncertainty-aware reconstruction of spatial data, but its high computational cost limits its applicability. A spatial-data reconstruction method based on a multi-resolution generative adversarial network (GAN) model is proposed, using a pyramid of fully convolutional GANs to learn training images at different resolutions. The method captures fine detail from high-resolution training images and large-scale features from low-resolution ones, so the reconstructed images contain both the global and local structure of the training images while retaining a degree of randomness. Comparisons with representative MPS algorithms and with GAN methods applied to spatial-data reconstruction show that the proposed method reduces the total time of ten reconstructions by about one hour, reduces the difference between the average porosity of the reconstructions and that of the training image to 0.0002, and yields variogram and multiple-point connectivity (MPC) curves closer to those of the training image, indicating better reconstruction quality.

15.
王玥, 周城, 熊承义, 舒振宇. 《计算机科学》, 2016, 43(2): 307-310, 315
The block-based compressed sensing reconstruction model uses blocking to reduce the high computational complexity and large storage requirements caused by an oversized measurement matrix, but block-wise reconstruction introduces blocking artifacts that must be removed by deblocking filters. Existing filtering methods do not consider the recovery of texture detail, degrading reconstruction quality. To address this, a texture-adaptive sampling method based on gray-level entropy is first proposed. The causes of blocking artifacts in block compressed sensing, and the reasons adaptive sampling alleviates them, are then analyzed; total-variation filtering is introduced into the smoothed-projection iterative reconstruction of block compressed sensing, and an adaptive weighted filtering model combining dual-tree discrete wavelet hard-threshold filtering and total-variation filtering, driven by block texture information, is proposed to replace the filtering step of the original smoothed-projection iterative algorithm. On top of the blocking reduction achieved by adaptive sampling, this preserves image detail more effectively. Simulations show that, compared with several existing schemes, the proposed scheme significantly improves both the subjective and objective quality of the reconstructed image while effectively retaining texture detail.

16.
Face super-resolution reconstruction based on feature space
张地, 何家忠. 《自动化学报》, 2012, 38(7): 1145-1152
Super-resolution image reconstruction is the process of reconstructing a higher-resolution image from multiple low-resolution images of the same scene. Traditional super-resolution reconstruction algorithms work in pixel space, solving via the mapping between high- and low-resolution pixel spaces, and suffer from drawbacks such as high computational complexity. For the problem of magnifying low-resolution face images, a feature-space face super-resolution reconstruction algorithm is proposed. Compared with traditional algorithms, it not only reduces computational complexity but is also more robust.

17.
Compressed sensing image recovery based on a non-local similarity model
For the compressed sensing (CS) image recovery problem, a recovery algorithm based on a non-local similarity model is proposed. The algorithm extends the conventional sparsity of 2-D image patches to the 3-D sparsity of groups of similar patches, improving the sparsity of the image representation and thereby the efficiency of CS image recovery; both texture and structure are much better preserved in the recovered images. In solving the model, the augmented Lagrangian method is used to convert the constrained optimization problem into an unconstrained one, and a Taylor-expansion-based linearization technique is used to accelerate the solver and reduce computational complexity. Experimental results show that the algorithm outperforms current mainstream CS image recovery algorithms.

18.
Omnidirectional cameras are useful in applications requiring rapid capture of image data representing the complete local environment. Feature detection from such image data is thus a prominent research issue. Transforming an omnidirectional image to a panoramic image may result in a sparse panoramic image with missing image data. Whilst image reconstruction techniques have been developed that enable the subsequent use of standard image processing algorithms, the development of image processing algorithms that can be applied directly to sparse image data has received less attention. We address the problem of corner point detection for sparse panoramic images by developing an algorithmic approach that can be applied directly to sparse unwarped omnidirectional images without the requirement of image reconstruction, and we illustrate the accurate performance of the algorithm through visual results and receiver operating characteristic curves.

19.
Purpose: Depth cameras capture scene depth in real time, but the captured depth images have low resolution and are prone to holes. Using a high-resolution color image as guidance is an important approach to depth-map super-resolution reconstruction. Existing methods struggle to handle inconsistencies between color edges and depth-discontinuity regions, introducing texture-copying artifacts into the reconstruction. To address this, a robust color-image-guided depth-map super-resolution algorithm is proposed. Method: First, exploiting the structural correlation between color-image edges and depth-image edges, an RGB-D structural similarity measure is proposed to detect edge-discontinuity regions shared by the color and depth images, and this measure is used to adaptively select the optimal image patch in the neighborhood of the pixel being estimated. Next, a multilateral-guided depth estimate is established within the patch via a proposed directional non-local means weight, resolving the structural inconsistency between color edges and depth discontinuities. Finally, the parameters of the multilateral guidance weights are adaptively tuned using the correspondence between the RGB-D structural similarity measure and image smoothness, yielding robust depth-map super-resolution reconstruction. Results: Experiments on the Middlebury synthetic dataset, ToF and Kinect datasets, and a self-built dataset show that, compared with other state-of-the-art methods, the proposed method effectively suppresses texture-copying artifacts; versus the second-best algorithm, it reduces the mean absolute deviation by about 63.51%, 39.47%, and 7.04% on average on Middlebury, ToF, and Kinect respectively. Conclusion: On both synthetic datasets and real-scene depth datasets, the method effectively handles inconsistencies between color edges and depth-discontinuity regions and better preserves the discontinuity of depth edges.

20.
Computing a distance map (distance transform) is an operation that converts a 2D image consisting of black and white pixels to an image where each pixel has a value or a pair of coordinates that represents the distance to or location of the nearest black pixel. It is a basic operation in image processing and computer vision fields, and is used for expanding, shrinking, thinning, segmentation, clustering, computing shape, object reconstruction, etc. This paper examines the possibility of implementing the problem of finding a distance map for an image efficiently using an optical bus. The computational model considered is the linear array with a reconfigurable pipelined bus system (LARPBS), which has been introduced recently based on current electronic and optical technologies. It is shown that the problem for an n × n image can be solved in O(log n log log n) bus cycles deterministically or in O(log n) bus cycles with high probability on an LARPBS with n² processors. We also show that the problem can be solved in O(log log n) bus cycles deterministically or in O(1) bus cycles with high probability on an LARPBS with n³ processors. Scalability of the algorithms is also discussed briefly. The algorithm compares favorably to the best known parallel algorithms for the same problem in the literature.
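For contrast with the parallel LARPBS algorithms analyzed above, the classical sequential distance transform is a simple two-pass scan; the sketch below computes the city-block (Manhattan) variant, which the parallel formulations generalize:

```python
def distance_map(image):
    """Sequential two-pass city-block distance transform: each pixel
    receives the Manhattan distance to the nearest black (1) pixel."""
    h, w = len(image), len(image[0])
    INF = h + w   # upper bound on any city-block distance in the image
    d = [[0 if image[y][x] else INF for x in range(w)] for y in range(h)]
    for y in range(h):                     # forward pass: from top-left
        for x in range(w):
            if y > 0:
                d[y][x] = min(d[y][x], d[y - 1][x] + 1)
            if x > 0:
                d[y][x] = min(d[y][x], d[y][x - 1] + 1)
    for y in range(h - 1, -1, -1):         # backward pass: from bottom-right
        for x in range(w - 1, -1, -1):
            if y < h - 1:
                d[y][x] = min(d[y][x], d[y + 1][x] + 1)
            if x < w - 1:
                d[y][x] = min(d[y][x], d[y][x + 1] + 1)
    return d

img = [[0, 0, 0],
       [0, 1, 0],
       [0, 0, 0]]
# → [[2, 1, 2], [1, 0, 1], [2, 1, 2]]
print(distance_map(img))
```

The two raster scans take O(n²) work on an n × n image; the paper's contribution is cutting this to polylogarithmic bus cycles by exploiting the optical bus's constant-time communication.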


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司)  京ICP备09084417号