Similar Literature
1.
In this article, an iterative maximum a posteriori (MAP) algorithm combined with adaptive neurofuzzy inference system based image segmentation, which we call the adaptive neurofuzzy inference system based expectation maximization (ANFIS‐EM) algorithm, is investigated for the reconstruction of positron emission tomography (PET) images. This expectation maximization (EM) variant provides better image quality than traditional methodologies. Unlike the usual EM algorithm, the proposed ANFIS‐EM method minimizes the EM objective function using the MAP approach, with a neural network based segmentation process incorporated into the image reconstruction. The adaptive neurofuzzy based MAP algorithm and a de‐noising algorithm are compared using the peak signal to noise ratio (PSNR) as the image quality metric, and the PET input image is reconstructed and simulated in the MATLAB/Simulink package. The ANFIS‐EM algorithm yields a 40% better PSNR than the MAP algorithm. © 2015 Wiley Periodicals, Inc. Int J Imaging Syst Technol, 25, 1–6, 2015
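The PSNR figure of merit used in the comparison above is straightforward to compute. A minimal sketch, assuming 8‑bit image data (the `psnr` helper and the toy arrays are illustrative, not taken from the paper):

```python
import numpy as np

def psnr(reference, reconstruction, peak=255.0):
    """Peak signal-to-noise ratio in dB between a reference and a reconstruction."""
    mse = np.mean((np.asarray(reference, float) - np.asarray(reconstruction, float)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)

# A reconstruction with small errors scores higher than a noisier one.
ref = np.full((8, 8), 128.0)
good = ref + 1.0    # MSE = 1
bad = ref + 10.0    # MSE = 100
print(psnr(ref, good), psnr(ref, bad))  # about 48.13 dB vs 28.13 dB
```

Every 10× reduction in MSE adds 10 dB of PSNR, which is why the metric is popular for quick reconstruction comparisons.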

2.
In this work, a hybrid continuous genetic algorithm (HCGA) based methodology has been developed for optimizing the number of projections for parallel‐ray transmission tomography. The HCGA calculations with filtered back‐projection (FBP) utilize 8 bits for both head and lung phantoms. The effect of the selection operator through proportionate, truncation, and tournament schemes has been analyzed, along with the introduction of a mixed‐selection scheme. Image quality has been measured using root‐mean‐squared error, Euclidean error, and peak signal‐to‐noise ratio (PSNR). The sensitivity of reconstructed image quality to various mutation operators, namely standard, gradient‐, and offset‐based schemes, has been analyzed along with the effect of the number of projections, which was optimized to maximize image quality while minimizing the radiation hazard involved. The results of HCGA have been compared with FBP as a deterministic technique and simulated annealing (SA) as a stochastic technique for IRT approximation. For the 8 × 8 head and lung phantoms, HCGA, SA, and FBP yielded PSNR values of 40.47, 33.92, and 8.28 dB, and 26.38, 20.36, and 12.98 dB, respectively. © 2007 Wiley Periodicals, Inc. Int J Imaging Syst Technol, 17, 10–21, 2007
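Of the selection schemes analyzed above, tournament selection is the simplest to sketch. A hedged illustration under toy assumptions (the `tournament_select` helper, population, and fitness function are hypothetical, not the authors' code):

```python
import random

def tournament_select(population, fitness, k=3, rng=random):
    """Draw k random candidates and return the one with the highest fitness."""
    contenders = rng.sample(range(len(population)), k)
    best = max(contenders, key=lambda i: fitness[i])
    return population[best]

random.seed(0)
# Toy population of real-valued chromosomes with fitness = sum of genes.
pop = [[random.random() for _ in range(4)] for _ in range(10)]
fit = [sum(ind) for ind in pop]
winner = tournament_select(pop, fit)
print(sum(winner))
```

Larger `k` increases selection pressure; `k = 1` degenerates to uniform random selection, which is one axis the sensitivity analysis above varies.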

3.
The iterative maximum‐likelihood expectation‐maximization (ML‐EM) algorithm is an excellent algorithm for image reconstruction and usually provides better images than the filtered backprojection (FBP) algorithm. However, a windowed FBP algorithm can outperform ML‐EM on certain occasions, when the least‐squared difference from the true image, that is, the least‐squared error (LSE), is used as the comparison criterion. Computer simulations were carried out for the two algorithms. For a given data set, the best reconstruction (relative to the true image) from each algorithm was first obtained, and the two reconstructions were compared. The stopping iteration number of the ML‐EM algorithm and the parameters of the windowed FBP algorithm were chosen so that each produced the image closest to the true image. However, using the LSE criterion to compare algorithms requires knowing the true image; how to select the optimal parameters when the true image is unknown is a practical open problem. For noisy Poisson projections, the simulation results indicate that the ML‐EM images are better than the regular FBP images, and the windowed FBP images are better than the ML‐EM images. For noiseless projections, the FBP algorithms outperform the ML‐EM algorithm. Overall, the simulations reveal that the windowed FBP algorithm can provide a reconstruction closer to the true image than the ML‐EM algorithm. © 2012 Wiley Periodicals, Inc. Int J Imaging Syst Technol, 22, 114–120, 2012
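The ML-EM iteration compared above has a standard multiplicative form: x ← x · Aᵀ(y / Ax) / Aᵀ1 for Poisson data y with system matrix A. A minimal sketch on a toy noiseless system (the matrix and dimensions are illustrative only):

```python
import numpy as np

def ml_em(A, y, n_iter=2000):
    """ML-EM for y ~ Poisson(A @ x): multiplicative update
    x <- x * A^T (y / (A x)) / (A^T 1), starting from a uniform image."""
    x = np.ones(A.shape[1])
    sens = A.sum(axis=0)          # sensitivity image A^T 1, assumed nonzero
    for _ in range(n_iter):
        proj = A @ x              # forward projection
        x *= (A.T @ (y / proj)) / sens
    return x

# Tiny consistent, noiseless example: ML-EM recovers the true activity.
A = np.array([[1.0, 0.5],
              [0.2, 1.0],
              [0.7, 0.3]])
x_true = np.array([2.0, 3.0])
y = A @ x_true
x_hat = ml_em(A, y)
print(np.round(x_hat, 3))
```

With noisy data the iteration is normally stopped early, since late iterations amplify noise; that stopping index is exactly the parameter the study above tunes against the true image.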

4.
王晓红  曾静  麻祥才  刘芳 《包装工程》2020,41(15):245-252
Objective: To effectively remove multiple types of image blur and improve image quality, an image deblurring method based on deep reinforcement learning is proposed. Methods: The GoPro and DIV2K datasets are used for experiments, with peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) as objective evaluation metrics. High-dimensional features of the blurred image are extracted with a convolutional neural network, and a deblurring framework is built by combining deep reinforcement learning with several CNN deblurring tools, using PSNR as the training reward function to select the optimal restoration policy and restore the blurred image step by step. Results: In training and testing, the proposed method achieves better subjective visual quality than existing mainstream algorithms, and also better PSNR and SSIM values. Conclusion: The experimental results show that the method effectively handles Gaussian blur, motion blur, and related degradations with good visual results, and offers a useful reference for the image deblurring field.

5.
Mondal PP  Rajan K 《Applied optics》2005,44(30):6345-6352
Positron emission tomography (PET) is one of the key molecular imaging modalities in medicine and biology. Penalized iterative image reconstruction algorithms frequently used in PET are based on maximum-likelihood (ML) and maximum a posteriori (MAP) estimation techniques. The ML algorithm produces noisy artifacts, whereas the MAP algorithm eliminates them by utilizing available prior information in the reconstruction process. MAP-based algorithms fail to determine the density class in the reconstructed image and hence penalize pixels irrespective of the density class and of the strength of interaction between nearest neighbors. A Hebbian neural learning scheme is proposed to model the nature of interpixel interaction and produce artifact-free, edge-preserving reconstructions. A key motivation of the proposed approach is to avoid the oversmoothing across edges that is often the case with MAP algorithms. It is assumed that local correlation plays a significant role in PET image reconstruction, and that proper modeling of the correlation weight (which defines the strength of interpixel interaction) is essential to generate artifact-free reconstructions. The Hebbian learning-based approach modifies the interaction weight by adding a small correction proportional to the product of the input signal (neighborhood pixels) and the output signal. Quantitative analysis shows that the Hebbian learning-based adaptive weight adjustment approach is capable of producing better reconstructed images than conventional ML and MAP-based algorithms in PET image reconstruction.
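The Hebbian correction described above, a weight change proportional to the product of input and output activity, can be sketched generically. A toy single-neuron illustration (the setup is illustrative, not the paper's PET interaction model):

```python
import numpy as np

def hebbian_update(w, x, y, eta=0.1):
    """Hebbian rule: dw_ij = eta * y_i * x_j, strengthening weights where
    input and output signals co-occur."""
    return w + eta * np.outer(y, x)

# One linear neuron repeatedly presented with the same pattern: weights
# along the active inputs grow (unbounded here, absent normalization).
x = np.array([1.0, 0.0, 1.0])
w = np.full((1, 3), 0.1)
for _ in range(5):
    y = w @ x                    # output signal
    w = hebbian_update(w, x, y)  # correction proportional to input * output
print(w)  # weight on the silent middle input stays at 0.1
```

In the paper's setting the "input" would be neighborhood pixels and the "output" the reconstructed pixel, so correlated neighbors develop stronger interaction weights.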

6.
To address problems in BP neural network image compression, such as the uneven processing quality of the high- and low-frequency parts of an image and edge effects, the JPEG baseline algorithm is introduced into the BP neural network image compression structure and the corresponding system is established. Experiments on the grayscale Lena image show that this new structure not only achieves a larger compression ratio but also a better peak signal-to-noise ratio. The results demonstrate that this adaptive image processing method is feasible and can reconstruct images efficiently and stably.

7.
Image compression reduces the number of bits required to represent an image, which lowers storage space and transmission cost. Compression techniques are widely used in many applications, especially in the medical field, where hospitals and medical organizations hold large numbers of medical image sequences; compressing them considerably reduces the memory they occupy as well as broadcast and transmission costs. Performance is assessed by compression ratio (CR), mean square error (MSE), bits per pixel (BPP), peak signal to noise ratio (PSNR), input and compressed image sizes, memory requirement, and computation time, with the pixel content of the images varying little during compression. This work compares different compression methods, namely Huffman, fractal, neural network back propagation (NNBP), and neural network radial basis function (NNRBF), applied to medical images such as MR and CT images. Experimental results show that the NNRBF technique achieves higher CR, BPP, and PSNR, with lower MSE, on CT and MR images than the Huffman, fractal, and NNBP techniques.
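The figures of merit compared above (CR, BPP, MSE, PSNR) are easy to compute from the raw and compressed byte counts plus the reconstruction. A sketch using lossless zlib as a stand-in codec (the helper name and toy image are illustrative, not from the paper):

```python
import zlib
import numpy as np

def compression_metrics(original_bytes, compressed_bytes, ref, recon, peak=255.0):
    """Compression ratio, bits per pixel, MSE, and PSNR for a codec run."""
    cr = original_bytes / compressed_bytes
    bpp = 8.0 * compressed_bytes / ref.size
    mse = float(np.mean((ref.astype(float) - recon.astype(float)) ** 2))
    psnr = 10.0 * np.log10(peak ** 2 / mse) if mse else float("inf")
    return cr, bpp, mse, psnr

# Lossless zlib on a highly repetitive 8-bit "image": CR > 1, MSE = 0.
img = np.tile(np.arange(16, dtype=np.uint8), (16, 16))
comp = zlib.compress(img.tobytes())
cr, bpp, mse, psnr = compression_metrics(img.nbytes, len(comp), img, img)
print(cr > 1.0, mse, bpp < 8.0)
```

For a lossy codec the same function applies, with `recon` the decoded image, so MSE becomes nonzero and PSNR finite; the trade-off between CR/BPP and PSNR is what the study above tabulates.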

8.
A tomographic reconstruction technique valid for line sources, curved detector arrays, and large objects is presented. For acquisitions involving a curved detector array, inverse diffraction is first used to propagate the field back to a straight line, and then the standard filtered backpropagation (FBP) algorithm is employed to reconstruct the image. Using inverse diffraction, the measured field can be accurately propagated all the way back to the reconstruction area. This is an essential improvement over the approximate backpropagation of Rytov data contained in the FBP algorithm, which becomes inaccurate when the distance from the measurement surface to the reconstruction area is large. The technique is applied to measured data and is shown to give reconstructions of high quality with respect to both geometry and velocity. It is also shown that, when the illuminating wave is cylindrical rather than plane, segmentation of the image can be combined with inverse diffraction and FBP reconstruction to obtain high-quality images of large objects.

9.
A model is developed for predicting the correlation between processing parameters and the technical targets of double glow plasma surface alloying by applying an artificial neural network (ANN). The input parameters of the neural network (NN) are source voltage, workpiece voltage, working pressure, and the distance between source electrode and workpiece. The output of the NN model consists of three important technical targets, namely the gross element content, the thickness of the surface alloying layer, and the absorption rate (the ratio of the mass loss of source materials to the mass gain of the workpiece) in the double glow plasma surface alloying process. The processing parameters and technical targets are used as the training set for the network, which is based on a multilayer feedforward architecture. Very good performance of the neural network is achieved, and the calculated results are in good agreement with the experimental ones.

10.
Positron emission tomography (PET) is becoming increasingly important in the fields of medicine and biology. Penalized iterative algorithms based on maximum a posteriori (MAP) estimation for image reconstruction in emission tomography place conditions on which types of images are accepted as solutions. The recently introduced median root prior (MRP) favors locally monotonic images. MRP can preserve sharp edges, but a steplike streaking effect and much noise are still observed in the reconstructed image, both of which are undesirable. An MRP tomography reconstruction combined with nonlinear anisotropic diffusion interfiltering is proposed for removing noise and preserving edges. Analysis shows that the proposed algorithm is capable of producing better reconstructed images compared with those reconstructed by conventional maximum-likelihood expectation maximization (MLEM), MAP, and MRP-based algorithms in PET image reconstruction.
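The median root prior favors locally monotonic images by penalizing deviation from a local median, which leaves edges and ramps unpenalized. A one-step sketch of such a correction (the 3×3 median, the `beta` value, and the spike image are illustrative; this is not the exact MRP-EM update from the paper):

```python
import numpy as np

def median3x3(img):
    """3x3 median with edge replication: the local reference of the MRP prior."""
    p = np.pad(img, 1, mode="edge")
    windows = [p[i:i + img.shape[0], j:j + img.shape[1]]
               for i in range(3) for j in range(3)]
    return np.median(windows, axis=0)

def mrp_step(x_em, beta=0.3):
    """MRP-style correction: divide the current estimate by
    1 + beta * (x - med(x)) / med(x), pulling pixels toward the local median."""
    med = median3x3(x_em)
    return x_em / (1.0 + beta * (x_em - med) / np.maximum(med, 1e-12))

# An isolated noise spike is suppressed; a locally monotonic (flat)
# region is left exactly unchanged.
img = np.full((5, 5), 10.0)
img[2, 2] = 100.0
out = mrp_step(img)
print(out[2, 2], out[0, 0])
```

Because a flat or monotone neighborhood equals its own median, the denominator is 1 there and the prior exerts no force, which is how MRP preserves sharp edges.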

11.
Research on the Application of Kansei Engineering in Product Color Design
孙菁  潘长学 《包装工程》2007,28(5):91-93
In the field of Kansei engineering research, artificial neural networks are regarded as a promising analysis tool. To reduce the deviation between the intended and perceived imagery of product color schemes, a questionnaire was designed based on color-matching theory, and a neural network combined with a genetic algorithm was used to simulate consumers' imagery evaluations and provide color suggestions suited to the product form. On this basis, the framework and implementation of a computer-aided color design system are proposed.

12.
李梅  张二虎 《包装工程》2022,43(11):283-291
Objective: Images restored by existing inverse halftoning methods still suffer from incompletely removed halftone screen patterns and insufficiently sharp detail. To further improve the quality of inverse-halftoned images in both smooth regions and texture details, an inverse halftoning method based on a multi-scale convolutional neural network with a fused attention mechanism is proposed. Methods: First, according to the multi-frequency distribution of dot noise in halftone images, a deep network built on multi-scale convolution is designed to suppress the halftone pattern and recover image information at several scales; then an attention mechanism is applied to rebuild the image information and generate the inverse-halftoned image; finally, a multi-task loss function is proposed to accelerate network optimization and better achieve inverse halftoning. Results: Experimental results show that inverse-halftoned images obtained with this method are visually closer to the original images and recover finer detail. In objective terms, compared with existing state-of-the-art methods, the average peak signal-to-noise ratio improves by 0.562–10.095 dB and the average structural similarity by 0.01–0.171. Conclusion: The method achieves high-quality restoration of halftone images.

13.
For a single-element transmit, array-receive model, the envelope of the reflected echo is used as the projection, and a cross section of the object is reconstructed with the filtered back-projection (FBP) algorithm. The receiving array is then repositioned to acquire projections of multiple cross sections, which are reconstructed and rendered into a three-dimensional image by volume rendering. Because the resulting image quality and resolution were poor, an electronic delay method is applied to the received signals: the echoes received by each element of the receiving array are delayed according to the spatial position of the scattering point relative to the center of the receiving array.

14.
Application of a Neural Network Model to the Preparation of SiC Coatings
The quality of the oxidation-resistant surface coating is the key factor limiting the use of carbon/carbon composites as high-temperature structural materials. In this paper, an artificial neural network model of the CVD-SiC coating preparation process is established to address the difficulty of effectively predicting and controlling a process with many influencing factors and complex interactions. The results show that the neural network model can accurately and comprehensively reflect the magnitude and underlying regularities of each process factor's influence on the SiC-CVD process; the model's predictions of the relationship between process parameters and deposition rate agree with the experimental results, confirming that applying an artificial neural network model to the control and optimization of the oxidation-resistant coating preparation process is effective and feasible.

15.
Research on a Genetic Algorithm and Neural Network Structural Control System
A new method applying a genetic algorithm and a neural network to active structural control is proposed. The genetic algorithm computes the control force online, while the neural network models the dynamic characteristics of the structure and thus replaces the structure in the dynamic analysis. The system exploits the respective strengths of the genetic algorithm and the neural network and is a promising new type of active control system.

16.
A multicriterion cross-entropy minimization approach to positron emission tomographic (PET) imaging is described. A previously unexplored multicriterion cross-entropy optimization algorithm based on weighted-sum scalarization is used to solve this problem. The efficacy of the algorithm is compared with that of the single-criterion optimization algorithm and the convolution backprojection method for image reconstruction from computer-generated projection data and Siemens PET scanner data. The algorithms described have been implemented on a PIII/686 microcomputer.

17.
The study proposes a convex combination (CC) algorithm to train a neural network (NN) model for crash injury severity prediction quickly and stably, and a modified NN pruning for function approximation (N2PFA) algorithm to optimize the network structure. To demonstrate the proposed approaches and to compare them with an NN trained by the traditional back-propagation (BP) algorithm and an ordered logit (OL) model, a two-vehicle crash dataset from 2006 provided by the Florida Department of Highway Safety and Motor Vehicles (DHSMV) was employed. According to the results, the CC algorithm outperforms the BP algorithm in both convergence ability and training speed. Compared with a fully connected NN, the optimized NN contains far fewer nodes and achieves comparable classification accuracy. Both have better fitting and predicting performance than the OL model, which again demonstrates the NN's superiority over statistical models for predicting crash injury severity. The pruned input nodes also confirm the ability of the structure optimization method to identify factors irrelevant to crash-injury outcomes. A sensitivity analysis of the optimized NN is further conducted to determine each explanatory variable's impact on each injury severity outcome. While most of the results conform to the coefficient estimates in the OL model and previous studies, some variables are found to have nonlinear relationships with injury severity, which further verifies the strength of the proposed method.

18.
The optimization of network topologies to retain generalization ability, by deciding when to stop overtraining an artificial neural network (ANN), is an outstanding challenge in ANN prediction work. The larger the dataset the ANN is trained with, the better the generalization of its predictions. In this paper, a large dataset of atmospheric corrosion data for carbon steel, compiled from several sources, is used to train and test a multilayer backpropagation ANN model as well as two conventional corrosion prediction models (the linear and Klinesmith models). Unlike previous related works, grid search-based hyperparameter tuning is performed to develop multiple hyperparameter combinations (network topologies), and multiple ANNs are trained with mini-batch stochastic gradient descent to facilitate training on the large dataset. A selection strategy based on early stopping is then applied to choose the optimal hyperparameter combination and guarantee the generalization ability of the optimal network model. The correlation coefficients (R) of the ANN model explain about 80% (more than 75%) of the variance of atmospheric corrosion of carbon steel, and the root mean square errors (RMSE) of the three models show that the ANN model outperforms the other two with acceptable generalization. The influence of the input parameters on the output is highlighted using the fuzzy curve analysis method. The results reveal that TOW, Cl- and SO2 are the most important atmospheric chemical variables, with a well-known nonlinear relationship to atmospheric corrosion.
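The early-stopping selection strategy described above can be sketched generically: halt training once the validation loss stops improving for a fixed number of epochs, and keep the best epoch seen. A minimal sketch with a hypothetical callback-style API (`train_step` and `val_loss` are placeholders, not the paper's code):

```python
def train_with_early_stopping(train_step, val_loss, max_epochs=100, patience=5):
    """Run training until validation loss fails to improve for `patience`
    consecutive epochs; return the best epoch and its validation loss."""
    best, best_epoch, bad = float("inf"), -1, 0
    for epoch in range(max_epochs):
        train_step(epoch)           # one epoch of optimization
        loss = val_loss(epoch)      # validation-set loss after the epoch
        if loss < best:
            best, best_epoch, bad = loss, epoch, 0
        else:
            bad += 1
            if bad >= patience:     # overtraining detected: stop
                break
    return best_epoch, best

# Toy validation curve: falls, bottoms out, then rises (overtraining).
curve = [1.0, 0.6, 0.4, 0.35, 0.34, 0.36, 0.40, 0.45, 0.5, 0.6, 0.7, 0.8]
epoch, loss = train_with_early_stopping(lambda e: None, lambda e: curve[e],
                                        max_epochs=len(curve), patience=3)
print(epoch, loss)  # stops shortly after the minimum at epoch 4
```

In a grid search, this loop would run once per hyperparameter combination, and the combination with the lowest best validation loss would be selected.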

19.
In medical imaging an enormous variety of algorithms have been proposed to reconstruct a cross section of the human body. In assessing the relative task-oriented performance of reconstruction algorithms, it is desirable to assign statistical significance to claims of superiority of one algorithm over another. However, very often the achievement of statistical significance demands a large number of observations. Performing such an evaluation on mathematical phantoms requires a means of running the competing algorithms on projection data obtained from a large number of randomly generated phantoms. Thereafter, various numerical measures of agreement between the reconstructed images and the original phantoms may be used to reach a conclusion which has some statistical substance. In this article we describe the software SuperSNARK, which automates an evaluation methodology for assigning statistical significance to the observed differences in performance of two or more image reconstruction algorithms. As a demonstration, we compare the relative efficacy of the maximum likelihood expectation maximization (ML-EM) algorithm and the filtered backprojection (FBP) method for performing three medical tasks in positron emission tomography (PET)—estimation of total uptake by structures, detection of relatively higher uptake between pairs of symmetric structures, and estimation of uptake at individual points within structures. We find that for estimating total uptake ML-EM outperforms FBP, for detecting relatively higher uptake there is not a statistically significant difference between the two methods, and for estimating pointwise uptake FBP outperforms ML-EM. It is demonstrated that SuperSNARK makes it easy to apply the methodology of statistical hypothesis testing to substantiate such claims of task-specific superiority of one reconstruction algorithm over another. © 1996 John Wiley & Sons, Inc.
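Assigning statistical significance to a difference between two reconstruction algorithms, as SuperSNARK automates, typically reduces to a paired test of per-phantom error figures. A minimal sketch with synthetic data (the error values and helper name are illustrative, not SuperSNARK output):

```python
import math
import random

def paired_t(diffs):
    """Paired t statistic for per-phantom error differences between two algorithms."""
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)  # sample variance
    return mean / math.sqrt(var / n)

random.seed(0)
# Hypothetical per-phantom errors: algorithm A consistently a little lower than B.
err_a = [random.gauss(1.0, 0.1) for _ in range(30)]
err_b = [a + random.gauss(0.05, 0.02) for a in err_a]
t = paired_t([b - a for a, b in zip(err_a, err_b)])
print(t)
```

With 29 degrees of freedom, |t| above roughly 2.05 rejects "no difference" at the two-sided 5% level; a small but consistent per-phantom gap yields a large t, which is why many random phantoms make modest differences detectable.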

20.
Research on the Application of Artificial Neural Networks to Fault Diagnosis of Air-Conditioning Systems
This paper first introduces the basic principles of artificial neural networks, then describes the back-propagation (BP) algorithm in detail, and finally studies the application of the BP algorithm to fault diagnosis of air-conditioning systems.
