Similar Documents (20 results)
1.
Presents a new method for endocardial (inner) and epicardial (outer) contour estimation from sequences of echocardiographic images. The framework herein introduced is fine-tuned for parasternal short-axis views at the papillary muscle level. The underlying model is probabilistic; it captures the relevant features of the image-generation physical mechanisms and of the heart morphology. Contour sequences are assumed to be two-dimensional noncausal first-order Markov random processes; each variable has a spatial index and a temporal index. The image pixels are modeled as Rayleigh-distributed random variables with means depending on their positions (inside the endocardium, between endocardium and pericardium, or outside the pericardium). The complete probabilistic model is built under the Bayesian framework, and the maximum a posteriori (MAP) criterion is adopted for estimation. To solve the resulting optimization problem (joint estimation of the contours and of the distributions' parameters), the authors introduce an algorithm herein named iterative multigrid dynamic programming (IMDP). It is a fully data-driven scheme with no ad hoc parameters. The method is implemented on an ordinary workstation, leading to computation times compatible with operational use. Experiments with simulated and real images are presented.

2.
An automatic phase-measurement method for fringe-pattern analysis
唐寿鸿, 金观昌. 《中国激光》 (Chinese Journal of Lasers), 1991, 18(6): 405-408
This paper presents an automatic phase-measurement method based on the discrete cosine transform (DCT) and validates it through experiments and numerical simulation.

3.
This paper addresses the problem of correlation estimation in sets of compressed images. We consider a framework where the images are represented under the form of linear measurements due to low complexity sensing or security requirements. We assume that the images are correlated through the displacement of visual objects due to motion or viewpoint change and the correlation is effectively represented by optical flow or motion field models. The correlation is estimated in the compressed domain by jointly processing the linear measurements. We first show that the correlated images can be efficiently related using a linear operator. Using this linear relationship we then describe the dependencies between images in the compressed domain. We further cast a regularized optimization problem where the correlation is estimated in order to satisfy both data consistency and motion smoothness objectives with a Graph Cut algorithm. We analyze in detail the correlation estimation performance and quantify the penalty due to image compression. Extensive experiments in stereo and video imaging applications show that our novel solution stays competitive with methods that implement complex image reconstruction steps prior to correlation estimation. We finally use the estimated correlation in a novel joint image reconstruction scheme that is based on an optimization problem with sparsity priors on the reconstructed images. Additional experiments show that our correlation estimation algorithm leads to an effective reconstruction of pairs of images in distributed image coding schemes that outperform independent reconstruction algorithms by 2–4 dB.

4.
A new method for extracting and thinning interference-fringe centerlines
Fringe analysis is an important technique in optical interferometric metrology. This paper presents a method for extracting and thinning interference-fringe centerlines that combines an improved Yangtagai extremum-point method with an improved Hilditch thinning algorithm. The method produces few fringe breakpoints, is robust to noise, and runs quickly. Experiments confirm its effectiveness.

5.
The problem of determining font metrics from measurements on images of typeset text is discussed, and least-squares procedures for font metric estimation are developed. When kerning is not present, sidebearing estimation reduces to solving a set of linear equations, called the sidebearing normal equations. More generally, simultaneous sidebearing and kerning-term estimation involves an iterative procedure in which a modified set of sidebearing normal equations is solved during each iteration. Character depth estimates are obtained by solving a set of baseline normal equations. In a preliminary evaluation of the proposed procedures on scanned text images in three fonts, the root-mean-square set-width estimation error was about 0.2 pixel. An application of font metric estimation to text image editing is discussed.
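The kerning-free case above can be illustrated with a tiny least-squares sketch (the three-character alphabet, the sidebearing values, and all names are made up for the demo). Without kerning, each observed inter-character gap is g(i, j) = rsb[i] + lsb[j]; stacking one equation per observed pair gives a linear system in the spirit of the sidebearing normal equations:

```python
import numpy as np

# Toy data: "true" right/left sidebearings for three characters (assumed values).
true_rsb = np.array([1.0, 2.0, 0.5])
true_lsb = np.array([0.5, 1.5, 1.0])

# One equation per ordered character pair: gap = rsb[i] + lsb[j].
rows, b = [], []
for i in range(3):
    for j in range(3):
        row = np.zeros(6)
        row[i] = 1.0        # coefficient of rsb[i]
        row[3 + j] = 1.0    # coefficient of lsb[j]
        rows.append(row)
        b.append(true_rsb[i] + true_lsb[j])

# The system is rank-deficient: adding c to every rsb and subtracting c from
# every lsb leaves all gaps unchanged. Pin lsb[0] to resolve the ambiguity.
rows.append(np.eye(6)[3])
b.append(true_lsb[0])

x, *_ = np.linalg.lstsq(np.array(rows), np.array(b), rcond=None)
print(np.round(x[:3], 6))   # recovered right sidebearings
```

With the gauge fixed, the augmented system has full column rank and the least-squares solution recovers the sidebearings exactly.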

6.
Accurate extraction of light-stripe centers directly affects the measurement and reconstruction precision of line-structured-light vision. To address the heavy computational load of the traditional Steger algorithm and its inaccuracy when the light stripe is non-uniform, this paper proposes a fast sub-pixel center-extraction method based on an improved Steger algorithm. Gaussian filtering and morphological operations remove the complex background from the laser-stripe image; adaptive thresholding shrinks the ROI; the Hough transform locates the stripe edge lines, whose midpoints are fitted with a straight line; finally, the improved algorithm extracts the center coordinates. Experimental results show that, compared with template-based and traditional Steger center-extraction algorithms, the method is more robust, with a maximum extraction error of 0.3 pixel, a mean error of 0.1 pixel, and an average running time 0.4 s shorter than the traditional algorithm.
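A minimal illustration of sub-pixel stripe-center extraction is a gray-level centroid along one image column, a much simpler stand-in for the Steger-style method the abstract improves on (the synthetic Gaussian stripe profile and all values are assumptions):

```python
import numpy as np

# One image column crossing a laser stripe, modeled as a Gaussian profile
# (center and width are made-up demo values).
rows = np.arange(40, dtype=float)
true_center = 17.3
col = np.exp(-0.5 * ((rows - true_center) / 2.0) ** 2)

# Intensity-weighted centroid gives a sub-pixel center estimate.
center = np.sum(rows * col) / np.sum(col)
print(round(center, 2))
```

On clean data the centroid lands on the true center to sub-pixel accuracy; real methods add filtering and directional analysis to cope with noise and stripe curvature.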

7.
The authors present a new illuminant-tilt-angle estimator that works with isotropic textures. It has been developed from Kube and Pentland's (1988) frequency-domain model of images of three-dimensional texture, and it is compared with Knill's (1990) spatial-domain estimator. The frequency- and spatial-domain theory behind the two estimators is related via an alternative proof of the basic phenomenon that both exploit: the variance of the partial derivative of the image is at a maximum when the derivative is taken in the direction of the illuminant's tilt. Results obtained using both simulated and real textures suggest that the frequency-domain estimator is more accurate.

8.
A new approach, in the framework of an eigenstructure method using a Hankel matrix, is developed for sinusoidal signal retrieval in white noise. A closed-form solution for the singular pairs of the matrix is defined in terms of the associated sinusoidal signals and noise. The estimated sinusoidal singular vectors are applied to form the noise-free Hankel matrix. A pattern-recognition technique is proposed for partitioning the signal and noise subspaces based on the singular pairs of the Hankel matrix. Three types of cluster structure in an eigen-spectrum plot are identified: well separated, touching, and overlapping. Overlapping, the most difficult case, corresponds to a low signal-to-noise ratio (SNR). Optimization of the Hankel matrix dimensions is suggested for enhancing the separability of the cluster structures. Once features have been extracted from both singular-value and singular-vector data, a fuzzy classifier is used to identify each singular component. Computer simulations have shown that the method is effective for the case of "touching" data and provides reasonably good results for sinusoidal signal reconstruction in the time domain. The limitations of the method are also discussed.
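The eigen-spectrum separation the abstract relies on can be sketched in a few lines (the signal, window length, and threshold are illustrative assumptions, not the paper's values): a real sinusoid contributes two dominant singular values to the Hankel matrix, and the gap down to the noise floor is what subspace partitioning exploits.

```python
import numpy as np

# Noisy sinusoid (made-up frequency and noise level for the demo).
rng = np.random.default_rng(1)
n = 128
t = np.arange(n)
x = np.sin(2 * np.pi * 0.1 * t) + 0.05 * rng.standard_normal(n)

# Hankel matrix: row i is x[i:i+L] (an assumed window length L).
L = 32
H = np.lib.stride_tricks.sliding_window_view(x, L)

# Singular-value spectrum: two signal values, then the noise floor.
s = np.linalg.svd(H, compute_uv=False)
print(s[1] / s[2] > 10)   # clear gap between signal and noise subspaces
```

At low SNR this gap shrinks until the clusters "touch" or overlap, which is the regime the paper's fuzzy classifier targets.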

9.
10.
In this paper a physical interpretation is given of a method for the estimation of movements in television images. The method, already presented by the author in other papers, is based on a linear regression of the image derivatives, both spatial and temporal. It is shown that a centered finite-differences approximation of the image differentials is mandatory to obtain good performance. Experimental results are presented to support the theoretical conclusions.

11.
In order to quantitate anatomical and physiological parameters such as vessel dimensions and volumetric blood flow, it is necessary to make corrections for scatter and veiling glare, which are the major sources of nonlinearities in videodensitometric digital subtraction angiography (DSA). A convolution filtering technique has been investigated to estimate scatter-glare distribution in DSA images without the need to sample the scatter-glare intensity for each patient. This technique utilizes exposure parameters and image gray levels to assign equivalent Lucite thickness for every pixel in the image. The thickness information is then used to estimate scatter-glare intensity on a pixel-by-pixel basis. To test its ability to estimate scatter-glare intensity, the correction technique was applied to images of a Lucite step phantom, anthropomorphic chest phantom, head phantom, and animal models at different thicknesses, projections, and beam energies. The root-mean-square (rms) percentage error of these estimates was obtained by comparison with direct scatter-glare measurements made behind a lead strip. The average rms percentage errors in the scatter-glare estimate for the 25 phantom studies and the 17 animal studies were 6.44% and 7.96%, respectively. These results indicate that the scatter-glare intensity can be estimated with adequate accuracy for a wide range of thicknesses, projections, and beam energies using exposure parameters and gray level information

12.
A method for modeling noise in medical images
We have developed a method to study the statistical properties of the noise found in various medical images. The method is specifically designed for types of noise with uncorrelated fluctuations. Such signal fluctuations generally originate in the physical processes of imaging rather than in the tissue textures. Various types of noise (e.g., photon, electronics, and quantization) often contribute to degrade medical images; the overall noise is generally assumed to be additive with a zero-mean, constant-variance Gaussian distribution. However, statistical analysis suggests that the noise variance could be better modeled by a nonlinear function of the image intensity depending on external parameters related to the image acquisition protocol. We present a method to extract the relationship between an image intensity and the noise variance and to evaluate the corresponding parameters. The method was applied successfully to magnetic resonance images with different acquisition sequences and to several types of X-ray images.
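The intensity-dependent variance idea can be sketched with synthetic data (the affine variance law, sample sizes, and all names are assumptions chosen for the demo, not the paper's model): simulate pixels whose noise variance grows with the signal, bin by intensity, and fit the variance-versus-intensity relationship.

```python
import numpy as np

# Synthetic image intensities with signal-dependent noise: var = 2*I + 5
# (an assumed affine law standing in for the paper's nonlinear function).
rng = np.random.default_rng(2)
intensity = rng.uniform(10, 200, size=200_000)
noisy = intensity + rng.standard_normal(intensity.size) * np.sqrt(2.0 * intensity + 5.0)

# Bin pixels by intensity and estimate the noise variance within each bin.
bins = np.linspace(10, 200, 20)
idx = np.digitize(intensity, bins)
centers, variances = [], []
for k in range(1, len(bins)):
    sel = idx == k
    centers.append(intensity[sel].mean())
    variances.append(np.var(noisy[sel] - intensity[sel]))

# Fit var ≈ slope * I + intercept across the bins.
slope, intercept = np.polyfit(centers, variances, 1)
print(round(slope, 2))
```

The recovered slope is close to the simulated value of 2, illustrating how a variance-versus-intensity curve can be read off real images once a noise-only residual is available.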

13.
This paper proposes a novel solution to two-dimensional (2-D) frequency estimation problems. The solution is applicable to cases where the data length is much larger in one dimension than in the other. The method estimates 2-D frequencies from several one-dimensional (1-D) frequency estimation processes and hence has low computational complexity. To avoid resolving close frequencies in the 1-D processing, we construct matrices to estimate linear combinations of the 2-D frequencies. Performance evaluation of the method is presented based on comparison with the Cramér-Rao bound (CRB) and on numerical simulations.

14.
A method for processing graphical information is proposed. The method makes it possible to code contour images with complex numbers unambiguously defined by the image shape. The mapping of a noisy bitmap image onto the complex plane is studied. The possibility of solving recognition problems such as object identification and determination of a figure's orientation in the plane is demonstrated.

15.
The problem of estimating the parameters of a model for bidimensional data made up of a linear combination of damped two-dimensional sinusoids is considered. Frequencies, amplitudes, phases, and damping factors are estimated by applying a generalization of the one-dimensional Prony method to spatial data. This procedure finds the desired quantities through an autoregressive model fit to the data, a polynomial rooting, and the solution of a least-squares problem. The autoregressive models involved have a particular structure that simplifies the analysis: their characteristic polynomial factors into two parts, so many of their properties can be easily determined. Quick estimates of the parameters are found using standard one-dimensional autoregressive estimation methods. An iterative procedure for refining the autoregressive parameter estimates, which yields better frequency estimates, is also discussed. Some simulation results are reported.
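The 1-D pipeline the paper generalizes can be shown in its simplest form (a single noise-free mode with assumed parameters, not the paper's 2-D setting): fit a linear-prediction coefficient and read the damping and frequency off it.

```python
import numpy as np

# One damped complex exponential with assumed damping and frequency.
n = np.arange(60)
alpha, f = -0.05, 0.12
x = np.exp((alpha + 2j * np.pi * f) * n)

# A single mode satisfies x[k+1] = a * x[k]; solve for a by least squares.
# (np.vdot conjugates its first argument, giving the normal-equation solution.)
a = np.vdot(x[:-1], x[1:]) / np.vdot(x[:-1], x[:-1])

# Damping is log|a|, frequency is arg(a)/(2*pi).
print(round(np.log(np.abs(a)), 4), round(np.angle(a) / (2 * np.pi), 4))
```

With several modes, `a` becomes the coefficient vector of an autoregressive model and the roots of its characteristic polynomial play the role of `a` here, which is the Prony-style rooting step the abstract describes.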

16.
Traditional channel-estimation methods usually assume that the number of multipath components is known and constant; particle filtering can estimate time-varying channels with Gaussian-distributed states. In real wireless environments, however, both the number and the amplitudes of the multipath components vary with time, which degrades the performance of particle-filter estimators. This paper proposes a hybrid MIMO-OFDM channel-estimation method based on binary particle swarm optimization (PSO) and Kalman filtering. The MIMO channel is modeled with random sets, from which a multipath transition-probability model is derived. Based on this model, the channel is decomposed into a discrete part and a continuous part, and the relationship between these parts and the overall channel is established. Binary PSO fits the discrete part of the channel, a Kalman filter estimates the channel amplitudes, and the closeness between the observations predicted from the channel estimate and the true observations serves as the fitness function. Simulation results show that the proposed method outperforms channel estimation based on the basic particle filter.
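The continuous (amplitude-tracking) part of such a hybrid estimator can be illustrated with a generic scalar Kalman filter (the Gauss-Markov channel model, all parameter values, and names are assumptions for the demo, not the paper's MIMO-OFDM scheme):

```python
import numpy as np

# Gauss-Markov fading amplitude h[k] = a*h[k-1] + w, observed as y[k] = h[k] + v.
rng = np.random.default_rng(4)
a, q, r = 0.99, 0.01, 0.1      # state transition, process and observation noise variances
h = 1.0                        # true channel amplitude
h_hat, p = 0.0, 1.0            # filter estimate and error variance

errs_raw, errs_kf = [], []
for _ in range(2000):
    h = a * h + rng.normal(0, np.sqrt(q))      # channel evolves
    y = h + rng.normal(0, np.sqrt(r))          # noisy observation
    # Kalman predict step
    h_hat, p = a * h_hat, a * a * p + q
    # Kalman update step
    k = p / (p + r)
    h_hat, p = h_hat + k * (y - h_hat), (1 - k) * p
    errs_raw.append((y - h) ** 2)
    errs_kf.append((h_hat - h) ** 2)

print(np.mean(errs_kf) < np.mean(errs_raw))    # filtering beats raw observations
```

The filtered mean-squared error falls well below the raw observation error, which is the benefit the hybrid method layers the discrete (binary-PSO) part on top of.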

17.
A new method for face-pose estimation
This paper presents a face-pose estimation method based on five feature points in a single face image: the outer corners of the two eyes, the two mouth corners, and the nose tip. The paper first gives the pose representation as three rotation angles (φ, ψ, θ), then describes the estimation method; experimental results demonstrate the algorithm's effectiveness.

18.
Look-up tables (LUTs) are a common method for increasing the speed of many algorithms. Their use can be extended to the reconstruction of nonuniformly sampled k-space data using either a discrete Fourier transform (DFT) algorithm or a convolution-based gridding algorithm. A table for the DFT would hold precalculated arrays of weights describing how each data point affects all of image space. A table for a convolution-based gridding operation would hold precalculated weights describing how each data point affects a small k-space neighborhood. These LUT methods were implemented in C++ on a modest personal computer system; they allowed a radial k-space acquisition sequence, consisting of 180 views of 256 points each, to be gridded in 36.2 ms, i.e., approximately 800 ns per point. By comparison, a similar implementation of the gridding operation without LUTs required 45 times longer (1639.2 ms) to grid the same data. This was possible even while using a 4 x 4 Kaiser-Bessel convolution kernel, which is larger than typically used. These table-based computations will allow real-time reconstruction in the future and can currently be run concurrently with the acquisition, allowing for completely real-time gridding.

19.
JPEG compression history estimation for color images
We routinely encounter digital color images that were previously compressed using the Joint Photographic Experts Group (JPEG) standard. En route to the image's current representation, the previous JPEG compression's various settings, termed its JPEG compression history (CH), are often discarded after the JPEG decompression step. Given a JPEG-decompressed color image, this paper aims to estimate its lost JPEG CH. We observe that the previous JPEG compression's quantization step introduces a lattice structure in the discrete cosine transform (DCT) domain. This paper proposes two approaches that exploit this structure to solve the JPEG compression history estimation (CHEst) problem. First, we design a statistical dictionary-based CHEst algorithm that tests the various CHs in a dictionary and selects the maximum a posteriori estimate. Second, for cases where the DCT coefficients closely conform to a 3-D parallelepiped lattice, we design a blind lattice-based CHEst algorithm. The blind algorithm exploits the fact that the JPEG CH is encoded in the nearly orthogonal bases for the 3-D lattice and employs novel lattice algorithms and recent results on nearly orthogonal lattice bases to estimate the CH. Both algorithms provide robust JPEG CHEst performance in practice. Simulations demonstrate that JPEG CHEst can be useful in JPEG recompression; the estimated CH allows us to recompress a JPEG-decompressed image with minimal distortion (large signal-to-noise ratio) while simultaneously achieving a small file size.
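A one-dimensional toy version of the lattice idea: quantization forces DCT coefficients onto multiples of an unknown step q, and the step can be recovered by scoring how close the data lie to each candidate lattice. The candidate range, noise model, and scoring below are illustrative assumptions, not the paper's MAP or lattice-basis algorithms.

```python
import numpy as np

# Simulated quantized DCT coefficients (step 12) with small decompression noise.
rng = np.random.default_rng(3)
true_q = 12
coeffs = true_q * rng.integers(-20, 21, size=5000)
coeffs = coeffs + rng.normal(0, 0.4, size=coeffs.size)

candidates = np.arange(2, 33)

def lattice_error(q):
    # Mean squared distance from each coefficient to the nearest multiple of q.
    r = np.remainder(coeffs, q)
    return np.mean(np.minimum(r, q - r) ** 2)

scores = np.array([lattice_error(q) for q in candidates])
# Divisors of the true step also fit the data, so take the LARGEST consistent step.
good = candidates[scores < 1.0]
print(good.max())
```

Taking the largest step whose lattice error stays at the noise floor resolves the divisor ambiguity; the paper's blind algorithm does the analogous job in three dimensions with nearly orthogonal lattice bases.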

20.