Similar Documents
20 similar documents found (search time: 406 ms)
1.
An image reconstruction algorithm is supposed to present an image that contains medically relevant information that exists in a cross section of the human body. There is an enormous variety of such algorithms. The question arises: Given a specific medical problem, what is the relative merit of two image reconstruction algorithms in presenting images that are helpful for solving the problem? An approach to answering this question with a high degree of confidence is that of ROC analysis of human observer performance. The problem with ROC studies using human observers is their complexity (and, hence, cost). To overcome this problem, it has been suggested to replace the human observer by a numerical observer. An even simpler approach is by the use of distance metrics, such as the root mean squared distance, between the reconstructed images and the known originals. For any of these approaches, the evaluation should be done using a sample set that is large enough to provide us with a statistically significant result. We concentrate in this paper on the numerical observer approach, and we reintroduce in this framework the notion of the Hotelling Trace Criterion, which has recently been proposed as an appropriate evaluator of imaging systems. We propose a definite strategy (based on linear abnormality-index functions that are optimal for the chosen figure of merit) for evaluating image reconstruction algorithms. We give details of two experimental studies that embody the espoused principles. Since ROC analysis of human observer performance is the ultimate yardstick for system assessment, one justifies a numerical observer approach by showing that it yields “similar” results to a human observer study. Also, since simple distance metrics are computationally less cumbersome than are numerical observer studies, one would like to replace the latter by the former, whenever it is likely to give “similar” results. 
We discuss approaches to assigning a numerical value to the “similarity” of the results produced by two different evaluators. We introduce a new concept, called rank-ordering nearness, which seems to provide us with a promising approach to experimentally determining the similarity of two evaluators of image reconstruction algorithms.

2.
Iterative image deconvolution algorithms generally lack objective criteria for deciding when to terminate iterations, often relying on ad hoc metrics for determining optimal performance. A statistical-information-based analysis of the popular Richardson-Lucy iterative deblurring algorithm is presented after clarification of the detailed nature of noise amplification and resolution recovery as the algorithm iterates. Monitoring the information content of the reconstructed image furnishes an alternative criterion for assessing and stopping such an iterative algorithm. It is straightforward to implement prior knowledge and other conditioning tools in this statistical approach.
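The Richardson-Lucy update itself is compact. A minimal 1-D sketch follows; note that the change-based stopping rule here is a simple illustrative stand-in, not the information-content criterion proposed in the abstract:

```python
import numpy as np

def richardson_lucy(blurred, psf, n_iter=50, tol=1e-6, eps=1e-12):
    """Richardson-Lucy deblurring (1-D, for brevity). The change-based
    stopping rule below is a simple stand-in for the paper's
    information-content criterion, which is not reproduced here."""
    psf = psf / psf.sum()
    psf_flip = psf[::-1]                       # adjoint (correlation) kernel
    estimate = np.full_like(blurred, max(blurred.mean(), eps))
    for _ in range(n_iter):
        # Forward model: blur the current estimate.
        reblurred = np.convolve(estimate, psf, mode="same")
        ratio = blurred / np.maximum(reblurred, eps)
        # Multiplicative EM update with the flipped PSF.
        new_estimate = estimate * np.convolve(ratio, psf_flip, mode="same")
        if np.max(np.abs(new_estimate - estimate)) < tol:
            estimate = new_estimate
            break
        estimate = new_estimate
    return estimate
```

The update preserves positivity, and on noiseless data it progressively sharpens the estimate toward the true signal, which is exactly where the noise-amplification question arises once noise is present.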

3.
Medical images for computer-aided diagnosis are acquired with devices such as CT scanners and MRI machines. The captured computed tomography (CT) and magnetic resonance imaging (MRI) images typically have limited spatial resolution, low contrast, noise, and nonuniform intensity variation due to environmental effects. As a result, object boundaries are blurred and distorted, and the objects themselves are imprecisely defined. Fuzzy sets and fuzzy logic are well suited to handling such vagueness and ambiguity, and fuzzy clustering has been widely used for image segmentation over the last decade. This study presents a comparative analysis of 14 fuzzy-clustering image segmentation algorithms applied to CT scan and MRI brain images. Seventeen data sets were used: 4 synthetic data sets (Bensaid, Diamond, Square, and its noisy version), 5 real-world digital images, and 8 CT scan/MRI brain images. Ground truth images support the qualitative analysis; the methods are also evaluated quantitatively with three validity metrics, namely the partition coefficient, partition entropy, and the Fukuyama-Sugeno index. A thorough review of the results shows that the extension of fuzzy C-means (EFCM) outperformed every other segmentation algorithm, even in a noisy environment, followed by the kernel-based FCM-σ algorithm, whose output is also very good.
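All 14 methods compared in the study extend the plain fuzzy C-means baseline; a minimal sketch of that baseline, together with Bezdek's partition coefficient used as one of the validity metrics (EFCM and KFCM-σ themselves are not reproduced here):

```python
import numpy as np

def fcm(X, c=2, m=2.0, n_iter=100, tol=1e-5, seed=0):
    """Plain fuzzy C-means on points X of shape (n, d).
    Returns cluster centers (c, d) and membership matrix U (c, n)."""
    rng = np.random.default_rng(seed)
    n = len(X)
    U = rng.random((c, n))
    U /= U.sum(axis=0)                  # memberships sum to 1 per point
    for _ in range(n_iter):
        Um = U ** m
        centers = (Um @ X) / Um.sum(axis=1, keepdims=True)
        d = np.linalg.norm(X[None, :, :] - centers[:, None, :], axis=2)
        d = np.maximum(d, 1e-12)
        # Membership update: u_ik proportional to d_ik^(-2/(m-1)).
        inv = d ** (-2.0 / (m - 1.0))
        U_new = inv / inv.sum(axis=0)
        if np.max(np.abs(U_new - U)) < tol:
            U = U_new
            break
        U = U_new
    return centers, U

def partition_coefficient(U):
    """Bezdek's partition coefficient: mean of squared memberships.
    Values close to 1 indicate a crisp (well-separated) partition."""
    return float((U ** 2).sum() / U.shape[1])
```

For image segmentation, X would hold per-pixel intensity (or feature) vectors, and each pixel is assigned to the cluster with its highest membership.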

4.
An optical security system based on a correlation between two separate binary computer-generated holograms has been developed and experimentally tested. The two holograms are designed using two different iterative algorithms: the projection-onto-constrained-sets algorithm and the direct binary search (DBS) algorithm. By placing the ready-to-use holograms on a modified joint transform correlator input plane, an output image is constructed as a result of a spatial correlation between the two functions coded by the holograms. Both simulation and experimental results are presented to demonstrate the system's performance. While we concentrate mainly on the DBS algorithm, we also compare the performance of both algorithms.

5.
A novel discrete self-organising migrating algorithm is introduced to solve the flowshop-with-blocking scheduling problem. New sampling routines have been developed that sample the space between solutions in order to drive the algorithm. The benchmark problem sets of Carlier, Heller, Reeves and Taillard are solved using the new algorithm, which compares favourably with the published Differential Evolution, Tabu Search and Genetic Algorithm approaches and their hybrid variants. A number of new upper bounds are obtained for the Taillard problem sets.

6.
A linear-array CCD scans images of circular holes, and image recognition locates each circle's centre, enabling precise measurement of hole positions. With the proposed algorithm, which identifies chord endpoints to determine the centre coordinates of the imaged circle, filtering the midpoint coordinates of 500 chords brings the recognition error close to one image pixel pitch. A variable-speed scanning scheme, slow at the edges and fast in the middle, densifies the effective sampling points and keeps the repeatability of the centre coordinates below 4 μm in both the X and Y directions.
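The abstract does not fully specify its chord-midpoint filtering, but the underlying centre-estimation step can be illustrated with a standard algebraic (Kåsa) least-squares circle fit to detected edge points, a common alternative formulation:

```python
import numpy as np

def fit_circle(x, y):
    """Algebraic (Kåsa) least-squares circle fit to edge points.
    Solves x^2 + y^2 = 2*a*x + 2*b*y + c, giving centre (a, b) and radius."""
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    rhs = x**2 + y**2
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    r = np.sqrt(c + a**2 + b**2)
    return float(a), float(b), float(r)
```

Averaging or filtering many such estimates over repeated scans plays the same error-reduction role as the 500-chord midpoint filtering described above.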

7.
B SOWMYA  B SHEELARANI 《Sadhana》2011,36(2):153-165
This paper addresses land cover classification using reformed fuzzy C-means. Clustering assigns objects into groups (clusters) so that objects in the same cluster are more similar to each other than to objects in different clusters. The most basic attribute for clustering an image is its luminance amplitude for a monochrome image and its colour components for a colour image. Since any given image may contain more than 16 million distinct colours, and analysing the image over all of them is impractical, similar colours are grouped together by clustering; the reformed fuzzy C-means algorithm is used for this purpose. The segmented images are compared using image quality metrics: peak signal-to-noise ratio (PSNR), error image, and compression ratio. The time taken for segmentation is also used as a comparison parameter. The techniques are applied to classify land cover.
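Of the metrics listed, PSNR is the most widely used; a minimal sketch:

```python
import numpy as np

def psnr(reference, test, max_val=255.0):
    """Peak signal-to-noise ratio in dB between two equal-size images.
    max_val is the peak intensity (255 for 8-bit images)."""
    mse = np.mean((reference.astype(float) - test.astype(float)) ** 2)
    if mse == 0:
        return float("inf")          # identical images
    return 10.0 * np.log10(max_val**2 / mse)
```

Higher PSNR indicates a segmented (or reconstructed) image closer to the reference; identical images give infinite PSNR.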

8.
金燕  周勇亮  陈彪 《光电工程》2012,39(5):85-90
The randomized Hough transform and the randomized circle detection algorithm are fast methods for detecting circular contours in images, but in practical applications they fall short in speed and accuracy, respectively. We group the random sampling distribution, the sample accumulation distribution, and the sampling-count threshold of these algorithms under the heading of sampling constraints, and the deviation between parameters computed from surrogate points and the true parameters under the heading of parameter calibration. After analysing these issues, we use an improved randomized circle detection algorithm as the fast recognition method and the randomized circle Hough transform as the calibration method, combining the advantages of both into an efficient circle detection algorithm built on a recognition-calibration framework. Experimental data show that, under noise and with imperfect circular contours, the framework balances detection speed and accuracy well, demonstrating the algorithm's efficiency.
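The core step of randomized circle detection is hypothesising a circle from three randomly sampled edge points and counting supporting points. A simplified sketch (it does not include the paper's recognition-calibration framework; names and thresholds are illustrative):

```python
import random
import numpy as np

def circle_from_3_points(p1, p2, p3):
    """Circle (cx, cy, r) through three non-collinear points,
    via the perpendicular-bisector linear system."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    A = np.array([[x2 - x1, y2 - y1],
                  [x3 - x1, y3 - y1]], dtype=float)
    b = 0.5 * np.array([x2**2 - x1**2 + y2**2 - y1**2,
                        x3**2 - x1**2 + y3**2 - y1**2])
    if abs(np.linalg.det(A)) < 1e-12:
        return None                       # collinear: no unique circle
    cx, cy = np.linalg.solve(A, b)
    r = float(np.hypot(x1 - cx, y1 - cy))
    return float(cx), float(cy), r

def randomized_circle_detect(points, trials=200, tol=1.0, min_votes=10, seed=0):
    """Sample point triples at random; keep the candidate circle supported
    by the most points lying within tol of its circumference."""
    rng = random.Random(seed)
    pts = np.asarray(points, dtype=float)
    pt_list = [tuple(p) for p in pts]
    best, best_votes = None, min_votes - 1
    for _ in range(trials):
        cand = circle_from_3_points(*rng.sample(pt_list, 3))
        if cand is None:
            continue
        cx, cy, r = cand
        dist = np.hypot(pts[:, 0] - cx, pts[:, 1] - cy)
        votes = int(np.sum(np.abs(dist - r) < tol))
        if votes > best_votes:
            best, best_votes = cand, votes
    return best
```

The calibration stage described above would then refine the accepted candidate's parameters, e.g. with a local Hough accumulation around them.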

9.
李玉蓉 《光电工程》2007,34(9):124-128
This paper proposes a new colour image quantization algorithm: a post-clustering algorithm based on a self-organizing neural network and linear pixel permutation. Linear pixel permutation is a method for sampling pixels uniformly from an image; it is used to determine the initial weight vectors and the training sample set of the improved self-organizing network. Training on only a subset of samples accelerates the training process. Experimental results show that, compared with other quantization algorithms, the proposed algorithm clearly improves both quantized image quality and efficiency, and does not depend on the algorithm's initial conditions.

10.
Several powerful iterative algorithms have been developed for the restoration and superresolution of diffraction-limited imagery by use of diverse mathematical techniques. Notwithstanding the mathematical sophistication of these approaches and their potential for resolution enhancement, the spectrum extrapolation that is central to superresolution arises in these algorithms only as a by-product, and can be verified only after the processing steps are complete, to confirm that an expansion of the image bandwidth has indeed occurred. To overcome this limitation, a new approach is described that mathematically extrapolates the image spectrum and employs it to design constraint sets for set-theoretic estimation procedures. Performance evaluation of a specific projection-onto-convex-sets algorithm using this approach for the restoration and superresolution of degraded images is outlined. The primary goal of the method is to expand the power spectrum of the input image beyond the range of the sensor that captured it.
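The alternating-projection machinery behind such set-theoretic superresolution can be illustrated with the classical Gerchberg-Papoulis scheme in 1-D; this is a generic POCS sketch under the usual finite-support assumption, not the paper's particular constraint-set design:

```python
import numpy as np

def extrapolate_spectrum(measured_spec, band_mask, support_mask, n_iter=500):
    """Gerchberg-Papoulis-style spectrum extrapolation: alternately enforce
    (1) the measured in-band spectrum and (2) the known finite spatial
    support. Both constraint sets are convex, so each sweep is a pair of
    POCS projections."""
    X = measured_spec * band_mask              # start from band-limited data
    for _ in range(n_iter):
        x = np.fft.ifft(X)
        x = x * support_mask                   # project: finite spatial support
        X = np.fft.fft(x)
        X = np.where(band_mask, measured_spec, X)  # project: in-band spectrum
    return np.fft.ifft(X).real
```

For noiseless, consistent data the iterate never moves farther from the true signal, so the extrapolated result is at least as good as the plain band-limited reconstruction.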

11.
The iterative maximum-likelihood expectation-maximization (ML-EM) algorithm is an excellent algorithm for image reconstruction and usually provides better images than the filtered backprojection (FBP) algorithm. However, a windowed FBP algorithm can outperform ML-EM on certain occasions when the least-squared difference from the true image, that is, the least-squared error (LSE), is used as the comparison criterion. Computer simulations were carried out for the two algorithms. For a given data set, the best reconstruction (relative to the true image) from each algorithm was first obtained, and the two reconstructions were then compared. The stopping iteration number of the ML-EM algorithm and the parameters of the windowed FBP algorithm were chosen so that each produced the image closest to the true image. However, using the LSE criterion to compare algorithms requires knowing the true image; how to select the optimal parameters when the true image is unknown remains a practical open problem. For noisy Poisson projections, the simulation results indicate that the ML-EM images are better than the regular FBP images, and the windowed FBP images are better than the ML-EM images. For noiseless projections, the FBP algorithms outperform the ML-EM algorithm. Overall, the simulations reveal that the windowed FBP algorithm can provide a reconstruction closer to the true image than the ML-EM algorithm. © 2012 Wiley Periodicals, Inc. Int J Imaging Syst Technol, 22, 114-120, 2012
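The ML-EM iteration being compared is the standard textbook update for emission tomography; a minimal sketch with a generic system matrix (not tied to the paper's simulation geometry):

```python
import numpy as np

def ml_em(A, y, n_iter=100):
    """ML-EM for emission tomography: A is the system matrix
    (rays x pixels), y the Poisson-mean projection data.
    The multiplicative update preserves positivity of the estimate."""
    x = np.ones(A.shape[1])              # positive initial estimate
    sens = np.maximum(A.sum(axis=0), 1e-12)   # sensitivity (column sums)
    for _ in range(n_iter):
        proj = A @ x                     # forward projection
        ratio = y / np.maximum(proj, 1e-12)
        x = x * (A.T @ ratio) / sens     # backproject the data/model ratio
    return x
```

The stopping-iteration question discussed above arises because, on noisy data, the LSE of the iterates typically decreases and then increases again as noise is amplified.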

12.
An evaluation of the suitability of eight existing phase unwrapping algorithms for use in a real-time optical body surface sensor based on Fourier fringe profilometry is presented. The algorithms are assessed on both the robustness of their results and their speed of execution, using four sets of real human body surface data, each containing 500 frames, obtained from patients undergoing radiotherapy, for whom fringe discontinuity is significant. We also present modifications to an existing algorithm, the noncontinuous quality-guided path algorithm (NCQUAL), that decrease its execution time by a factor of 4, making it suitable for use in a real-time system, and we compare the results of the modified algorithm with those of the existing algorithms. Three suitable algorithms were identified: the two-stage noncontinuous quality-guided path algorithm (TSNCQUAL), the modified algorithm presented here, for online processing; and Flynn's minimum discontinuity algorithm (FLYNN) and the preconditioned conjugate gradient (PCG) algorithm for enhanced accuracy in off-line processing.
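The one-dimensional building block underneath all of these 2-D algorithms is Itoh's unwrapping rule; a minimal sketch (the quality-guided and minimum-discontinuity algorithms themselves decide along which 2-D paths this rule is applied):

```python
import math

def unwrap_1d(phases):
    """Itoh's 1-D phase unwrapping: whenever consecutive wrapped samples
    differ by more than pi, add or subtract the multiple of 2*pi that
    wraps the difference back into (-pi, pi]."""
    out = [phases[0]]
    for p in phases[1:]:
        d = p - out[-1]
        d -= 2 * math.pi * round(d / (2 * math.pi))  # wrap difference
        out.append(out[-1] + d)
    return out
```

This succeeds only when the true phase changes by less than pi per sample, which is exactly why significant fringe discontinuities in body-surface data make the 2-D path choice critical.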

13.
A rapid high-precision measurement of distorted power is described. The measurement is based on digital filtering techniques. The convergence conditions of two previous sampling algorithms are derived, and their frequency responses are analyzed. An adjustable window used for measuring distorted power is also proposed. It is demonstrated that the sampling algorithm using the window leads to significant savings in measuring and computing time, and high accuracy can be obtained.

14.
Coordinate measuring machines (CMMs) are used to examine the conformity of produced parts with the designer's intent. Inspection of free-form surfaces is difficult owing to their complexity and irregularity, and many tasks must be performed to ensure reliable, efficient inspection using CMMs. Sampling is an essential step in inspection planning, and efficient, reliable approaches were developed to determine the locations of the points to be sampled from free-form surfaces using the CMM. Four heuristic sampling algorithms based on the NURBS features of free-form surfaces are presented; the sampling criteria are equiparametric spacing, surface patch size, and surface patch mean curvature. An algorithm for automatic selection of sampling algorithms performs complexity checks on NURBS surfaces, including changes in surface curvature and patch size, and selects the suitable sampling algorithm. Extensive simulations, incorporating simulated CMM measurement errors and manufacturing form errors, evaluated the developed methodologies on free-form surfaces of varying complexity against a uniform sampling pattern. The developed algorithms provide a useful tool for selecting effective sampling plans for tactile CMM inspection planning of free-form surfaces.

15.
蔡念  张海员  张楠  潘睛 《工程图学学报》2012,33(1):50-55,14
To preserve the basic information of an image as far as possible while improving its visual quality and spatial resolution, an improved wavelet-based weighted parabolic interpolation algorithm is proposed, which adds an interpolation error-compensation term to the traditional weighted parabolic algorithm. The Sobel operator determines the edge direction at each interpolation point, yielding an initial magnified image. The wavelet transform then extracts the high-frequency components, the amplitude-enhanced original image serves as the low-frequency part, and the inverse wavelet transform produces the high-resolution image. Experimental results show that, compared with traditional image magnification algorithms, the proposed algorithm accounts for global correlation and yields sharper edge information.

16.
This study explores the use of teaching-learning-based optimization (TLBO) and artificial bee colony (ABC) algorithms for determining the optimum operating conditions of combined Brayton and inverse Brayton cycles. Maximization of the thermal efficiency and the specific work of the system are treated simultaneously as the objective functions for multi-objective optimization, with the upper-cycle pressure ratio and the bottom-cycle expansion pressure as design variables. An application example demonstrates the effectiveness and accuracy of the proposed algorithms; their results are validated against those obtained with the genetic algorithm (GA) and particle swarm optimization (PSO) on the same example, and the proposed algorithms improve on both. The effect of varying the algorithm parameters on convergence and on the fitness values of the objective functions is also reported.

17.
Image denoising is an essential and difficult image processing problem. In this study, two image denoising algorithms based on fractional calculus operators are proposed. The first uses the convolution of the covariance of fractional Gaussian fields with the fractional sinc of order α (FS); the second uses its convolution with the fractional differential Heaviside function (FDHS), which is the limit of the FS. In both algorithms the noisy image is processed blockwise: each pixel is convolved with mask windows in four directions, and the final filtered image, based on either the FS or the FDHS, is obtained by averaging the magnitudes of the four convolution results for each filter mask window. The outcomes are evaluated by visual perception and peak signal-to-noise ratio (PSNR). Experiments demonstrate the effectiveness of the proposed algorithms in removing Gaussian and speckle noise: the FS and FDHS achieved average PSNRs of 28.88 and 28.26 dB, respectively, for Gaussian noise, outperforming the Gaussian and Wiener filters.

18.
Stern A  Porat Y  Ben-Dor A  Kopeika NS 《Applied optics》2001,40(26):4706-4715
An algorithm is developed to increase the spatial resolution of digital video sequences captured with a camera subject to mechanical vibration. Blur caused by camera vibration is often the primary cause of image degradation. We address the degradation caused by low-frequency vibrations, that is, vibrations for which the exposure time is less than the vibration period. The blur caused by low-frequency vibrations differs from other types in having a random shape and displacement, and the different displacement of each frame makes the approach used in superresolution (SR) algorithms suitable for resolution enhancement. However, SR algorithms developed for general types of blur must be adapted to the specific characteristics of low-frequency vibration blur. We use the method of projection onto convex sets together with a motion estimation method specially adapted to these characteristics, and we show that the random nature of low-frequency vibration blur requires selecting the frames prior to processing. The restoration performance, as well as the frame selection criteria, depends mainly on the motion estimation precision.

19.
Schmit J  Creath K 《Applied optics》1995,34(19):3610-3619
Phase-shifting interferometry suffers from two main sources of error: phase-shift miscalibration and detector nonlinearity. Algorithms that calculate the phase of a measured wave front must be highly tolerant of these error sources. An extended method for deriving such error-compensating algorithms, patterned on the sequential application of the averaging technique, is proposed here. Two classes of algorithms were derived: one based on the popular three-frame technique and the other on the four-frame technique, with algorithms of up to six frames derived in each class. The new five-frame algorithm and two new six-frame algorithms have smaller phase errors caused by phase-shifter miscalibration than any of the common three-, four-, or five-frame algorithms. An analysis of the errors resulting from algorithms in both classes is provided by computer simulation and by an investigation of the spectra of sampling functions.
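A concrete example of an error-compensating algorithm in this family is the well-known Hariharan five-frame algorithm (shown here for illustration; it is not necessarily the new five-frame algorithm derived in the paper):

```python
import math

def hariharan_phase(I1, I2, I3, I4, I5):
    """Hariharan five-frame phase-shifting algorithm for nominal pi/2
    steps: phi = atan2(2*(I2 - I4), 2*I3 - I1 - I5). Its averaging-based
    derivation makes it insensitive, to first order, to phase-shifter
    miscalibration."""
    return math.atan2(2.0 * (I2 - I4), 2.0 * I3 - I1 - I5)

def frames(phi, bias=1.0, vis=0.5, step=math.pi / 2):
    """Synthesize the five interferogram intensities for a true phase phi,
    with phase shifts of (k - 2) * step for k = 0..4."""
    return [bias * (1.0 + vis * math.cos(phi + (k - 2) * step)) for k in range(5)]
```

With exact pi/2 steps the recovery is exact; with a miscalibrated step the residual phase error is only second order in the calibration error, which is precisely the compensation property the abstract describes.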

20.
An ultrasound-device image-reconstruction algorithm has been described previously that uses orthonormal wavelets as the basis of a transform space. The transform algorithms make it possible to analyze the reflected ultrasound signal from a sample to produce a map of one of its internal properties, the acoustical impedance. Conventional wavelets are not translation invariant, which often generates nonzero expansion coefficients for wavelets of lower sequency than the transmitted signal. Transforming the basis to a set of functions that exhibit a form of translation invariance removes this problem, although the new functions are no longer orthogonal. An algorithm is described that performs this transformation extremely efficiently. Also described is an algorithm that unsmears the image when the transmitted signal is not a single wavelet but a short sequence (linear combination) of wavelets; the coefficients of the array used to deconvolve the signal are determined by applying a forward wavelet transformation to the transmitted signal itself. © 1994 John Wiley & Sons, Inc.


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号