Similar Literature
A total of 20 similar documents were found.
1.
For CT image reconstruction in the field of large-scale integrated circuits, a reconstruction model that uses the l1 norm as the regularization term under a TV constraint is proposed, together with a Bregman-iteration-based algorithm for solving it. The algorithm proceeds in two steps: 1) a Bregman iteration solves the constrained extremum problem whose regularization term is the l1 norm of the image and whose fidelity term is the weighted l2 norm of the error; 2) the TV constraint is then used to correct the reconstruction obtained in step 1). Solving the l1-regularized, TV-constrained model in these two separate stages lowers the algorithm's complexity and speeds up convergence, and the algorithm can quickly reconstruct good-quality CT images from sparse projection data. Simulation experiments on the classical Shepp-Logan phantom and reconstructions of measured projection data of a circuit board show that the algorithm meets the reconstruction quality requirements while substantially improving reconstruction speed.
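A minimal sketch of the kind of two-step formulation described above; the update form, the weighting matrix W, and the tolerance ε are illustrative assumptions rather than the authors' exact equations:

$$
x^{(k+1)} = \arg\min_{x}\; \lambda\,\|x\|_{1} + \tfrac{1}{2}\,\|Ax - b^{(k)}\|_{W}^{2}, \qquad
b^{(k+1)} = b^{(k)} + \big(b - Ax^{(k+1)}\big),
$$
$$
x^{(k+1)} \leftarrow \arg\min_{u}\; \mathrm{TV}(u) \quad \text{s.t.} \quad \|u - x^{(k+1)}\|_{2} \le \varepsilon ,
$$

where A is the projection matrix and b the measured projections. The first line is a standard Bregman "add back the residual" iteration for the l1-regularized, weighted-l2 fidelity subproblem; the second line is the TV-constrained correction applied to its output.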

2.
徐敏达  李志华 《计算机科学》2018,45(12):210-216
To address the artifacts and noise that appear when reconstructing images from incomplete projection data, an image reconstruction model with simultaneous L1 and TV regularization is proposed. Based on this model, an image reconstruction algorithm is further developed by combining Bregman iteration with TV soft-threshold filtering. The algorithm first produces a preliminary reconstruction from the projection data using an optimized Bregman iteration, then performs a second reconstruction by applying TV soft-threshold filtering to the modified total variation model, and finally checks the preset convergence threshold: if it is satisfied, reconstruction stops and the image is output; otherwise the two steps are repeated until the iteration completes. Experiments on the noise-free Shepp-Logan phantom and a noisy Abdomen phantom verify the effectiveness of the algorithm: it is visually superior to typical reconstruction algorithms such as ART, LSQR, LSQT-STF, and BTV, and comparisons over several evaluation metrics confirm its clear advantage. The results show that the proposed algorithm effectively removes streak artifacts, preserves image details, and is robust to noise.
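As an illustrative outline only, the reconstruct-filter-check loop described above might be organized as follows; `bregman_step` and `tv_filter` are caller-supplied stand-ins (not functions from the paper), and the relative-change stopping rule is an assumption:

```python
import numpy as np

def reconstruct(A, projections, bregman_step, tv_filter, tol=1e-4, max_outer=50):
    """Illustrative outer loop: Bregman reconstruction, then TV soft-threshold
    filtering, then a relative-change convergence check."""
    x = np.zeros(A.shape[1])                  # initial (flattened) image estimate
    for _ in range(max_outer):
        x_prev = x.copy()
        x = bregman_step(A, projections, x)   # step 1: optimized Bregman iteration
        x = tv_filter(x)                      # step 2: TV soft-threshold filtering
        change = np.linalg.norm(x - x_prev) / (np.linalg.norm(x_prev) + 1e-12)
        if change < tol:                      # step 3: convergence threshold check
            break
    return x
```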

3.
The conventional total variation (TV) minimization algorithm is a classical compressed sensing (CS) based iterative reconstruction algorithm that can reconstruct images with high accuracy from sparse or noisy data. However, TV may introduce blocky artifacts when reconstructing images that are not distinctly piecewise constant, and studies on image denoising have shown that high-order total variation (HOTV) can effectively suppress the blocky artifacts introduced by the TV model. In view of this, an HOTV image reconstruction model and its Chambolle-Pock (CP) solution algorithm are proposed. Specifically, a second-order TV norm is constructed from second-order gradients, a data-fidelity-constrained second-order TV minimization reconstruction model is designed, and the corresponding CP algorithm is derived. Reconstruction experiments with qualitative and quantitative analysis are carried out on a Shepp-Logan phantom with a wave-like background, a gradually varying grayscale phantom, and a real CT image phantom, under both ideal and noisy projection data. The results on ideal projection data show that, compared with the conventional TV algorithm, the HOTV algorithm effectively suppresses blocky artifacts and improves reconstruction accuracy. The results on noisy projection data show that both algorithms are robust to noise, but the HOTV algorithm preserves edges better and is more robust. For images whose piecewise-constant features are weak and whose grayscale variations are pronounced, the HOTV algorithm is a better reconstruction algorithm than TV. The proposed HOTV algorithm can be extended to CT reconstruction under various scanning modes and to other imaging modalities.
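A compact way to write the kind of data-fidelity-constrained second-order TV model described above; the constraint form and the tolerance ε are assumptions for illustration:

$$
\min_{x}\; \|\nabla^{2} x\|_{1} \quad \text{s.t.} \quad \|Ax - b\|_{2} \le \varepsilon ,
$$

where $\nabla^{2}x$ collects the second-order finite differences of the image $x$, $A$ is the system (projection) matrix, $b$ the measured data, and $\varepsilon$ a noise-dependent tolerance. The saddle-point reformulation of this constrained problem is what a Chambolle-Pock primal-dual algorithm solves.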

4.
To address the degradation of low-dose CT image quality caused by excessive quantum noise, a region-based statistical iterative reconstruction algorithm with a joint prior is proposed for low-dose CT. During reconstruction the image is partitioned into regions: the image is median filtered, the gradient of the filtered image is computed, and edge regions and flat regions are identified from the gradient. Total variation (TV) regularization and Gaussian Markov random field (MRF) regularization are then used to penalize the different regions, the two regularization terms are applied as a joint prior in a penalized weighted least-squares reconstruction algorithm, and the objective function is solved with successive over-relaxation (SOR). Simulation results show that the algorithm has strong denoising capability and effectively preserves edge details in the reconstructed image.
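One way to write a penalized weighted least-squares objective with such a region-split joint prior; the region masks, the elementwise masking, and the scalar weights below are illustrative assumptions, not the authors' exact formulation:

$$
\hat{x} = \arg\min_{x}\; (y - Ax)^{\mathrm{T}}\Sigma^{-1}(y - Ax)
+ \beta_{1}\,\mathrm{TV}\big(x \odot m_{\mathrm{edge}}\big)
+ \beta_{2}\,R_{\mathrm{MRF}}\big(x \odot m_{\mathrm{flat}}\big),
$$

where $\Sigma$ is the diagonal covariance of the quantum-noise-dominated measurements $y$, $m_{\mathrm{edge}}$ and $m_{\mathrm{flat}}$ are the masks obtained from the gradient of the median-filtered image, and the whole objective is minimized with SOR.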

5.
Compound regularized compressed sensing image reconstruction with optimized weighted TV
Objective: Compressed sensing theory breaks the limit of the traditional Shannon-Nyquist sampling theorem and can recover the original signal from relatively few samples. For the compressed sensing image reconstruction problem, this paper proposes a compound regularized reconstruction model based on optimized weighted total variation (TV). Method: The proposed reconstruction model is built on the TV regularization model. First, to overcome the tendency of conventional TV regularization to blur or lose edges and texture details in the reconstructed image, weights estimated from the image gradient are introduced to construct a weighted TV reconstruction model. Second, the Rudin-Osher-Fatemi (ROF) total variation denoising model is used to optimize the weight estimates, reducing the influence of noise on the weight computation. Third, a nonlocal structural-similarity prior and a local autoregressive prior are incorporated into the weighted TV model, yielding the compound regularized reconstruction model with optimized weighted TV. Finally, the optimization model is solved by combining a projection method with an operator-splitting method. Results: Compound regularization priors are used to model the different characteristics of natural images. Experiments show that the reconstruction problem is well solved by the proposed method: the weighted TV prior reconstructs flat regions and strong edges well, while the nonlocal structural-similarity prior and the local autoregressive prior guarantee the reconstruction of fine structures. Conclusion: A new compound regularized compressed sensing reconstruction model is proposed. Compared with other TV-based reconstruction models, experimental results show that the proposed model clearly improves reconstruction performance in both visual quality and objective evaluation metrics.
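A sketch of a weighted TV term of the kind referred to above; the specific weight function and the parameter σ are an assumed example rather than the authors' formula:

$$
\mathrm{TV}_{w}(x) = \sum_{i} w_{i}\,\big|(\nabla x)_{i}\big|, \qquad
w_{i} = \frac{1}{1 + \big|(\nabla\tilde{x})_{i}\big| / \sigma},
$$

where $\tilde{x}$ is the ROF-denoised estimate used to stabilize the weight computation: pixels on strong edges receive small weights and are penalized less, which is what preserves edges and textures compared with unweighted TV.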

6.
Most existing low-rank tensor completion models over-constrain sparsity and therefore ignore fine features of the recovered data. To address this, this paper uses low-rank matrix factorization and framelet transforms, introduces an ■-norm regularization term associated with a soft-thresholding operator, and proposes a low-rank tensor completion model based on approximate sparse regularization. To solve the model efficiently, the ■ norm is rewritten as a weighted ■ norm with a nonlinear discontinuous weight function, the discontinuous weight function is approximated by a continuous one, and a block successive upper-bound minimization algorithm is designed on this basis. Convergence of the algorithm is proved under certain conditions. Extensive experiments show that the proposed algorithm reconstructs local detail features of images better than several existing classical algorithms.

7.
In computed tomography (CT), the total variation (TV) constrained iterative model tends to produce staircase artifacts and fails to preserve fine structures in the image. To address this, an adaptive-step-size nonlocal total variation (NLTV) constrained iterative reconstruction algorithm is proposed. Since the NLTV model preserves and restores image details and textures well, CT reconstruction is first formulated as a constrained optimization model that seeks, within the solution set satisfying the projection-data fidelity term, a solution minimizing a particular regularization term, namely the NLTV. The algebraic reconstruction technique (ART) and split Bregman (SB) are then used to ensure that the reconstruction satisfies the data-fidelity and regularization constraints, and the adaptive steepest descent projection onto convex sets (ASD-POCS) algorithm is adopted as the basic iterative framework. Experimental results show that, for noise-free sparse reconstruction, the proposed algorithm already reconstructs satisfactory results from projection data at 30 views. In noisy sparse-data experiments, the algorithm reaches a nearly converged result within 30 iterations, with a root mean square error (RMSE) 2.5 times that of the ASD-POCS algorithm. The proposed algorithm reconstructs accurate images from sparse projection data, improves the detail-reconstruction ability of the TV iterative model, and suppresses noise to a certain extent.
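A highly simplified outline of an ASD-POCS-style alternation consistent with the description above; `art_sweep` and `nltv_gradient` are caller-supplied stand-ins, and the geometric step-size decay is only a schematic version of the adaptive rule:

```python
import numpy as np

def asd_pocs_like(A, b, art_sweep, nltv_gradient, n_iter=30, step0=0.2, shrink=0.95):
    """Schematic ASD-POCS-style loop: an ART/split-Bregman data-fidelity step,
    then an adaptive steepest-descent step on the NLTV regularizer."""
    x = np.zeros(A.shape[1])
    step = step0
    for _ in range(n_iter):
        x_data = art_sweep(A, b, x)            # enforce projection-data fidelity
        dp = np.linalg.norm(x_data - x)        # distance moved by the fidelity step
        g = nltv_gradient(x_data)              # descent direction of the NLTV term
        g = g / (np.linalg.norm(g) + 1e-12)
        x = x_data - step * dp * g             # regularization step scaled to dp
        step *= shrink                         # simple geometric step-size decay
    return np.clip(x, 0.0, None)               # non-negativity (a POCS constraint)
```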

8.
A tomographic image reconstruction algorithm with multi-component regularization constraints
Compressed sensing theory provides a theoretical foundation and broad application prospects for high-quality reconstruction of sparse signals at low sampling rates; however, most existing studies assume that the original image is piecewise smooth, which is unsuitable for texture-rich images. Based on multi-component analysis, the original image is decomposed into a cartoon part and a texture part, and, exploiting the sparsity of each component under different transforms, a multi-component regularized tomographic image reconstruction model is proposed. The split Bregman method is used to decouple the multiple regularization terms, decomposing the problem into a least-squares subproblem and a denoising subproblem, and an alternating iterative algorithm for the model is developed. Simulation experiments on MRI and CT images and comparisons with several recent algorithms show that the model preserves small- and medium-scale information such as textures well and converges quickly.
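The cartoon-plus-texture decomposition above can be sketched as the following multi-regularized model; choosing a TV term for the cartoon part and an l1-analysis term for the texture part is an assumed instance, not necessarily the authors' exact transforms:

$$
\min_{u,\,v}\; \mathrm{TV}(u) + \lambda\,\|\Psi v\|_{1} + \frac{\mu}{2}\,\big\|A(u+v) - b\big\|_{2}^{2},
$$

where $u$ is the piecewise-smooth (cartoon) component, $v$ is the texture component assumed sparse under the transform $\Psi$, and $x = u + v$ is the reconstructed image. Split Bregman decouples the two regularizers, so each iteration reduces to a least-squares step plus simple denoising/shrinkage steps.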

9.
Compressed sensing is widely used in signal recovery and in image reconstruction and denoising, and the reconstruction algorithm is one of its key components. When the sampling rate is very low, reconstructing the original signal is difficult, and most existing algorithms perform poorly in this setting. Using [p(0

10.
Because traditional super-resolution reconstruction algorithms cannot effectively suppress image noise under severe blur and noise, a generalized total variation super-resolution reconstruction algorithm based on iteratively reweighted norms is proposed. The algorithm constructs a generalized total variation cost function from an iteratively reweighted data fidelity term and regularization term, and optimizes it with a preconditioned conjugate gradient method, which effectively suppresses noise. Experiments show that the algorithm removes noise while preserving image details well and produces good visual quality.
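An iteratively reweighted norm cost of the general form described above; the exponents p and q and the weight updates are standard IRN-style choices given here as an assumed example:

$$
J(x) = \sum_{k} w_{f,k}\,\big|(Hx - y)_{k}\big|^{2} + \lambda \sum_{i} w_{r,i}\,\big|(Dx)_{i}\big|^{2},
\qquad
w_{f,k} = \big|(Hx - y)_{k}\big|^{\,p-2}, \quad
w_{r,i} = \big|(Dx)_{i}\big|^{\,q-2},
$$

where $H$ models blur and downsampling, $D$ is a discrete gradient operator, and each reweighted quadratic subproblem is minimized with preconditioned conjugate gradients before the weights are refreshed from the current residuals.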

11.
王恒  郑笔耕 《测控技术》2016,35(1):38-42
To eliminate the ringing artifacts of current image reconstruction algorithms, avoid over-smoothing texture regions, preserve fine edges and rich textures simultaneously, and thereby achieve high visual quality in the reconstructed image, an image reconstruction algorithm based on a weighted TV (total variation)/SAR (simultaneous auto-regression) joint prior and a convex combination of minimal linear KL divergences is proposed. A weight factor is introduced to extract a nonlocal SSIM constraint from the degraded image, which is combined with the TV functional to design a weighted TV regularization prior with enhanced sparsity. The joint posterior distribution of the reconstructed image is then obtained from the SAR prior and the weighted TV prior. Finally, a convex combination of minimal linear KL divergence functions is constructed, and an optimal minimization technique is introduced to solve the posterior distribution and complete the Bayesian inference. User responses to the algorithm under different degrees of degradation are also studied. Test results show that, compared with current image reconstruction techniques, the proposed algorithm restores images well, and it is preferred by users when images are severely degraded.

12.
This paper addresses single-machine scheduling problems that combine learning effects and resource allocation in a group technology environment. In the proposed model, the actual processing time of a job depends simultaneously on its position within its group, the position of its group, and the amount of resource allocated to it. The learning effect and two resource allocation functions are examined for minimizing the weighted sum of makespan and total resource cost, and the weighted sum of total completion time and total resource cost. We show that the problems of minimizing the weighted sum of makespan and total resource cost remain polynomially solvable, and we prove that the problems of minimizing the weighted sum of total completion time and total resource cost admit polynomial-time solutions under certain conditions.

13.
Nonlinear integrals play an important role in information fusion. So far, all existing nonlinear integrals of a function with respect to a set function have been defined on a subset of a space. In many information fusion problems, such as decision tree generation in inductive learning, we often need to deal with functions defined on a partition of the space. Motivated by minimizing the classification information entropy of a partition while generating decision trees, this paper proposes a nonlinear integral of a function with respect to a nonnegative set function on a partition, and establishes that the sum of the weighted entropies of the union of several subsets is not less than the sum of the weighted entropies of a single subset. It is shown that selecting the entropy of a single attribute is better than selecting the entropy of the union of several attributes when generating rules by decision trees.

14.
Scheduling with two competing agents on a single machine has become a popular research topic in recent years. Most research focuses on minimizing the objective function of one agent subject to the constraint that the objective function of the other agent does not exceed a given limit. In this paper we adopt a weighted-combination approach to the two-agent single-machine scheduling problem. The objective that we seek to minimize is the weighted sum of the total completion time of the jobs of one agent and the total tardiness of the jobs of the other agent. We provide two branch-and-bound algorithms to solve the problem. In addition, we present a simulated annealing algorithm and two genetic algorithms to obtain near-optimal solutions, and we report the results of computational experiments conducted to test the performance of the proposed algorithms.

15.
How to efficiently and fairly allocate data rate among different users is a key problem in multiuser multimedia communication. However, most existing optimization-based methods, such as minimizing the weighted sum of the distortions or maximizing the weighted sum of the peak signal-to-noise ratios (PSNRs), have their weights determined heuristically, and they focus mainly on efficiency with no notion of fairness. In this paper, we address this problem by proposing a game-theoretic framework in which the utility/payoff function of each user/player is jointly determined by the characteristics of the transmitted video sequence and the allocated bit-rate. We show that a unique Nash equilibrium (NE), which is proportionally fair in terms of both utility and PSNR, can be obtained, and that according to it the controller can efficiently and fairly allocate the available network bandwidth to the users. Moreover, we propose a distributed cheat-proof rate allocation scheme that lets the users converge to the optimal NE using an alternative ascending clock auction. We also show that the traditional optimization-based approach that maximizes the weighted sum of the PSNRs is a special case of the game-theoretic framework with the utility function defined as an exponential function of PSNR. Finally, we present several experimental results on real video data to demonstrate the efficiency and effectiveness of the proposed method.

16.
We consider various single machine scheduling problems in which the processing time of a job depends either on its position in a processing sequence or on its start time. We focus on problems of minimizing the makespan or the sum of (weighted) completion times of the jobs. In many situations we show that the objective function is priority-generating, and therefore the corresponding scheduling problem under series-parallel precedence constraints is polynomially solvable. In other situations we provide counter-examples that show that the objective function is not priority-generating.

17.
In this paper, we study the problem of minimizing the weighted sum of makespan and total completion time in a permutation flowshop where the processing times are supposed to vary according to learning effects. The processing time of a job is a function of the sum of the logarithms of the processing times of the jobs already processed and its position in the sequence. We present heuristic algorithms, which are modified from the optimal schedules for the corresponding single machine scheduling problem and analyze their worst-case error bound. We also adopt an existing algorithm as well as a branch-and-bound algorithm for the general m-machine permutation flowshop problem. For evaluation of the performance of the algorithms, computational experiments are performed on randomly generated test problems.
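A learning-effect model consistent with that description, written in one commonly used form (the exponents and the convex-combination weighting of the objective are assumptions for illustration):

$$
p_{j,r} = p_{j}\Big(1 + \sum_{k=1}^{r-1}\ln p_{[k]}\Big)^{a_{1}} r^{\,a_{2}}, \qquad a_{1},\, a_{2} \le 0,
$$

where $p_{j,r}$ is the actual processing time of job $j$ scheduled in position $r$ and $p_{[k]}$ is the normal processing time of the job in position $k$; the objective is then the weighted sum $\alpha\,C_{\max} + (1-\alpha)\sum_{j} C_{j}$ with $0 \le \alpha \le 1$, where $C_{j}$ denotes the completion time of job $j$ on the last machine.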

18.
This paper presents a new nonlinear multi-objective mathematical model for a single-machine scheduling problem with three objectives: (1) minimizing the sum of the weighted job completion times, (2) minimizing the sum of the weighted delay times, and (3) maximizing the sum of the job values in the makespan. In addition, a number of constraints are incorporated into the model, such as repair and maintenance periods, deterioration of jobs, and the learning effect of the work process. Since this type of scheduling problem is NP-hard, solving it with common software packages is almost impossible, or at best very time consuming. Thus, a meta-heuristic algorithm based on simulated annealing (SA) is proposed to solve this hard problem. Finally, the results obtained by the proposed SA are compared with those reported by the Lingo 8 software to demonstrate the efficiency and capability of the proposed SA algorithm.

19.
In this paper we give a tight bound on the average sensitivity of the weighted sum function, confirming a conjecture of Shparlinski. The weights of the weighted sum functions are also given, and it is shown that they are all asymptotically balanced.

20.
In this paper, we describe a mathematical framework for determining the weight functions in variable weight combined forecasting (VWCF) problems with continuous variable weights. Using the polynomial approximation theorem and matrix analysis, we obtain the general formula for the variable weight functions wi(t) in VWCF problems. We put forward the optimal weight matrix and obtain the optimal weights by minimizing the error sum of squares J at any given times.
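The variable weight combined forecast described above can be summarized as follows; the notation (m component forecasts, normalized weights) is assumed for illustration:

$$
\hat{y}(t) = \sum_{i=1}^{m} w_{i}(t)\,\hat{y}_{i}(t), \qquad \sum_{i=1}^{m} w_{i}(t) = 1,
$$

where $\hat{y}_{i}(t)$ is the $i$-th individual forecast and the polynomial weight functions $w_{i}(t)$ are chosen to minimize the error sum of squares $J = \sum_{t}\big(y(t) - \hat{y}(t)\big)^{2}$ over the given times.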
