Similar Literature
 20 similar documents found (search time: 46 ms)
1.
The estimation of dynamically evolving ellipsoids from noisy lower-dimensional projections is examined. In particular, this work describes a model-based approach using geometric reconstruction and recursive estimation techniques to obtain a dynamic estimate of left-ventricular ejection fraction from a gated set of planar myocardial perfusion images. The proposed approach differs from current ejection fraction estimation techniques both in the imaging modality used and in the subsequent processing, which yields a dynamic ejection fraction estimate. For this work, the left ventricle is modeled as a dynamically evolving three-dimensional (3-D) ellipsoid. The left-ventricular outline observed in the myocardial perfusion images is then modeled as a dynamic, two-dimensional (2-D) ellipsoid, obtained as the projection of the former 3-D ellipsoid. These data are processed in two ways: first, as a 3-D dynamic ellipsoid reconstruction problem; second, each view is considered as a 2-D dynamic ellipse estimation problem and the 3-D ejection fraction is then obtained by combining the effective 2-D ejection fractions of each view. The approximating ellipsoids are reconstructed using a Rauch-Tung-Striebel smoothing filter, which produces an ejection fraction estimate that is more robust to noise since it is based on the entire data set; in contrast, traditional ejection fraction estimates are based on only two frames of data. Further, numerical studies of the sensitivity of this approach to unknown dynamics and projection geometry are presented, providing a rational basis for specifying system parameters. This investigation includes estimation of ejection fraction from both simulated and real data.
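Once end-diastolic and end-systolic ellipsoid volumes are available, the ejection fraction is a simple volume ratio. The following sketch illustrates that final step only (it is not the authors' reconstruction pipeline); the semi-axis values are made up:

```python
import math

def ellipsoid_volume(a, b, c):
    """Volume of a 3-D ellipsoid with semi-axes a, b, c."""
    return 4.0 / 3.0 * math.pi * a * b * c

def ejection_fraction(v_ed, v_es):
    """EF = (end-diastolic volume - end-systolic volume) / end-diastolic volume."""
    return (v_ed - v_es) / v_ed

# Hypothetical semi-axes (cm) at end diastole and end systole.
v_ed = ellipsoid_volume(4.0, 3.0, 3.0)
v_es = ellipsoid_volume(3.2, 2.4, 2.4)
ef = ejection_fraction(v_ed, v_es)
```

Note that the 4/3·π factor cancels in the ratio, so only the semi-axis products matter.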

2.
The application of level set techniques to echocardiographic data is presented. This method allows semiautomatic segmentation of heart chambers, which regularizes the shapes and improves edge fidelity, especially in the presence of gaps, as is common in ultrasound data. The task of the study was to reconstruct left ventricular shape and to evaluate left ventricular volume. Data were acquired with a real-time three-dimensional (3-D) echocardiographic system. The method was applied directly in the three-dimensional domain and was based on a geometric-driven scheme. The numerical scheme for solving the proposed partial differential equation is borrowed from numerical methods for conservation laws. Results refer to in vitro and human in vivo acquired 3-D + time echocardiographic data. Quantitative validation was performed on in vitro balloon phantoms. Clinical application of this segmentation technique is reported for 20 patient cases, providing measures of left ventricular volumes and ejection fraction.

3.
Abnormal left ventricular function is a diagnostic indication of cardiac disease. Left ventricular function is commonly quantified by ejection fraction measurements. A novel approach for the determination of left ventricular ejection fraction from technetium-99m-methoxy isobutyl isonitrile multiple-gated radionuclide angiocardiography is presented. Data from 23 patients, symptomatic of cardiac disease, indicate that ejection fractions determined using the radionuclide technique correlate well with contrast X-ray single-plane cineangiography (r = 0.83, p < 10^-6). Data from 14 of the patients indicate favorable correlation with technetium-99m-pertechnetate gated blood pool radionuclide angiocardiography (r = 0.87, p < 10^-4).

4.
An approach to automated outlining of the left ventricular contour and its bounded area in gated isotopic ventriculography is proposed. Its purpose is to determine the ejection fraction (EF), an important parameter for measuring cardiac function. The method uses a modified version of the fuzzy C-means (MFCM) algorithm and a labeling technique. The MFCM algorithm is applied to the end-diastolic (ED) frame and then the FCM is applied to the remaining images in a “box” of interest. The MFCM generates a number of fuzzy clusters, each cluster being a substructure of the heart (left ventricle, ...). A cluster validity index is used to estimate the optimum number of clusters present in the image data. This index takes account of the homogeneity within each cluster and is connected to the geometrical properties of the data set. The labeling is performed only to achieve the detection process in the ED frame. Since the left ventricle (LV) cluster has the greatest area of the cardiac image sequence in the ED phase, a framing operation is performed to obtain, automatically, the “box” enclosing the LV cluster. The EF assessed in 50 patients by the proposed method and by a routinely used semi-automatic one are presented. A good correlation between the two methods' EF values is obtained (R = 0.93). The LV contours found have been judged very satisfactory by a team of trained clinicians.
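The core fuzzy C-means iteration alternates membership and centroid updates. Below is a minimal one-dimensional sketch of standard FCM (not the paper's modified MFCM variant, and with made-up intensity data), using the usual fuzzifier m = 2:

```python
import numpy as np

def fcm_step(x, centers, m=2.0):
    """One fuzzy C-means iteration on 1-D data x: update memberships, then centroids."""
    d = np.abs(x[:, None] - centers[None, :]) + 1e-12   # distances to each center
    u = 1.0 / (d ** (2.0 / (m - 1.0)))                  # inverse-distance weights
    u /= u.sum(axis=1, keepdims=True)                   # normalise memberships per point
    um = u ** m
    new_centers = (um * x[:, None]).sum(axis=0) / um.sum(axis=0)
    return u, new_centers

# Toy pixel intensities forming two clusters.
x = np.array([0.0, 0.1, 0.2, 0.9, 1.0, 1.1])
centers = np.array([0.0, 1.0])
for _ in range(20):
    u, centers = fcm_step(x, centers)
```

Each row of `u` sums to one, so every pixel belongs partially to every cluster, which is what allows the validity index in the abstract to weigh cluster homogeneity.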

5.
A quantitative approach for correction of background counts is described for determination of the left ventricular ejection fraction from first-pass radionuclide angiocardiography. First, the method is investigated theoretically and numerically using a mathematical model. It is demonstrated that the ejection fraction can be estimated relatively well, even in a noisy situation. Second, the method is applied to the left ventricular time-activity curves from two different regions of interest, one carefully selected and one laxly selected, and the results are compared to each other. Good agreement (correlation coefficient = 0.96) for 20 patients was obtained between the ejection fractions from the carefully selected region of interest and those from the laxly selected one.
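A common form of background-corrected EF from count data subtracts the background from both the end-diastolic and end-systolic counts, which algebraically reduces to the ratio below. This is a generic illustration (the paper's specific correction model is not reproduced here) with made-up count values:

```python
def corrected_ef(ed_counts, es_counts, bg_counts):
    """Background-corrected ejection fraction from count data:
    EF = ((ED - BG) - (ES - BG)) / (ED - BG) = (ED - ES) / (ED - BG)."""
    return (ed_counts - es_counts) / (ed_counts - bg_counts)

ef = corrected_ef(ed_counts=1000.0, es_counts=600.0, bg_counts=200.0)
```

Without the correction the same data would give (1000 − 600)/1000 = 0.40, so neglecting background systematically underestimates EF.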

6.
A statistical methodology is proposed to rank several estimation methods of a relevant clinical parameter when no gold standard is available. Based on a regression without truth method, the proposed approach was applied to rank eight methods without using any a priori information regarding the reliability of each method or its degree of automation. It relied only on a prior on the statistical distribution of the parameter of interest in the database. The ranking of the methods relies on figures of merit derived from the regression and computed using a bootstrap process. The methodology was applied to the estimation of the left ventricular ejection fraction derived from cardiac magnetic resonance images segmented using eight approaches with different degrees of automation: three segmentations were performed entirely manually and the others were variously automated. The ranking of methods was consistent with the expected performance of the estimation methods: the most accurate estimates of the ejection fraction were obtained using manual segmentations. The robustness of the ranking was demonstrated when at least three methods were compared. These results suggest that the proposed statistical approach might be helpful to assess the performance of estimation methods on clinical data for which no gold standard is available.
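The bootstrap-ranking mechanics can be sketched independently of the regression-without-truth model. The toy below ranks three hypothetical methods by a simple figure of merit (deviation from the across-method consensus) bootstrapped over patients; the figure of merit, noise levels, and data are all assumptions, not the paper's formulation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical EF estimates (rows: 50 patients, columns: 3 methods).
true_ef = rng.uniform(0.3, 0.7, size=50)
noise = [0.01, 0.03, 0.08]                       # assumed per-method noise levels
est = np.stack([true_ef + rng.normal(0, s, 50) for s in noise], axis=1)

def figure_of_merit(sample):
    """Mean squared deviation of each method from the across-method mean."""
    consensus = sample.mean(axis=1, keepdims=True)
    return ((sample - consensus) ** 2).mean(axis=0)

# Bootstrap the figure of merit over resampled patients (200 replicates).
fom = np.mean([figure_of_merit(est[rng.integers(0, 50, 50)])
               for _ in range(200)], axis=0)
ranking = np.argsort(fom)                        # best (lowest) first
```

Resampling patients gives each method's figure of merit a spread, which is what lets the paper attach robustness statements to the ranking rather than a single point estimate.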

7.
We propose a principled framework for recursively segmenting deformable objects across a sequence of frames. We demonstrate the usefulness of this method on left ventricular segmentation across a cardiac cycle. The approach involves a technique for learning the system dynamics together with methods of particle-based smoothing as well as nonparametric belief propagation on a loopy graphical model capturing the temporal periodicity of the heart. The dynamic system state is a low-dimensional representation of the boundary, and the boundary estimation involves incorporating curve evolution into recursive state estimation. By formulating the problem as one of state estimation, the segmentation at each particular time is based not only on the data observed at that instant, but also on predictions based on past and future boundary estimates. Although this paper focuses on left ventricle segmentation, the method generalizes to temporally segmenting any deformable object.

8.
In this work, a computer-based algorithm is proposed for the initial interpretation of human cardiac images. Reconstructed single photon emission computed tomography images are used to differentiate between subjects with normal and abnormal ejection fraction values. The method analyses pixel intensities that correspond to blood flow in the left ventricular region. The algorithm proceeds through three main stages: the first stage performs pre-processing to reduce noise as well as blur in the image; the second stage extracts features from the images; classification is done in the final stage. The pre-processing stage consists of a de-noising part and a de-blurring part. Novel features are used for classification. Features are extracted as three different sets based on the pixel intensity distribution in different regions, the spatial relationship of pixels, and multi-scale image information. Two supervised algorithms are proposed for classification: one is based on a threshold value computed from the features extracted from the training images, and the other is based on a sequential minimal optimization-based support vector machine approach. Experimental studies were performed on real cardiac SPECT images obtained from a hospital. The result of classification has been verified by an expert nuclear medicine physician and by the ejection fraction value obtained from quantitative gated SPECT, the most widely used software package for quantifying gated SPECT images.
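The first of the two classifiers — a threshold learned from training features — can be sketched very simply. The scheme below (midpoint between class means) and the feature values are assumptions for illustration, not the paper's exact rule:

```python
import numpy as np

def fit_threshold(features, labels):
    """Place a decision threshold midway between the two class means
    (assumed scheme; labels: 0 = normal, 1 = abnormal)."""
    normal = features[labels == 0].mean()
    abnormal = features[labels == 1].mean()
    return (normal + abnormal) / 2.0

# Hypothetical scalar perfusion-derived features for six training subjects.
feats = np.array([0.8, 0.9, 0.85, 0.3, 0.4, 0.35])
labels = np.array([0, 0, 0, 1, 1, 1])
thr = fit_threshold(feats, labels)
pred = (feats < thr).astype(int)   # low perfusion feature -> abnormal
```

An SMO-based SVM (the paper's second classifier) would replace this one-dimensional rule with a maximum-margin hyperplane over the full feature sets.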

9.
A new algorithm for successive identification of seismic reflections is proposed. Generally, the algorithm can be viewed as a curve matching method for images with specific structure. However, in the paper, the algorithm works on seismic signals assembled to constitute an image in which the investigated reflections produce curves. In numerical examples, the authors work on signals assembled in CMP gathers. The key idea of the algorithm is to estimate the reflection curve parameters and the reflection coefficients along these curves by combining the multipulse technique and the generalized Radon transform. The multipulse technique is used for wavelet identification in each trace, and the generalized Radon transform is used to coordinate the wavelet identification between the individual traces. Furthermore, a stop criterion and a reflection validation procedure are presented. The stop criterion stops the reflection estimation when the current estimated reflection is insignificant. The reflection validation procedure ensures that the estimated reflections follow the shape of the investigated reflection curves. The algorithm is successfully used in two numerical examples. One is based on a synthetic CMP gather, whereas the other is based on a real recorded CMP gather. Initially, the algorithm requires an estimate of the wavelet, which can be obtained by any wavelet estimation method.

10.
Experiments with constant ejection flow periods on the rabbit left ventricle suggest that left ventricular pressure can be described by a time-varying three-element model consisting of elastance Ee(t), resistance R(t), and series elastance Es(t). Previous experiments demonstrated the existence of a "deactivation effect" after the cessation of a constant ejection flow period, which could be described by a decrease of elastance Ee(t). This paper presents a simulation model based on the findings of constant ejection flow experiments and tests it on measured pressure and volume data. The results show that when the model is fitted to one single beat, left ventricular pressure can satisfactorily be described by a three-element model without deactivation. However, the model does not predict isovolumic pressure at end-ejection volume. When isovolumic pressure has to be described by the model as well, introduction of deactivation is necessary. The quality of the model was further tested by fitting it to two beats with different ejection parameters. Deactivation again was necessary for a good fit. Only with a deactivation effect in the model are the component values close to the normal range found with CFP experiments in rabbit left ventricles. From the simulation results it can be concluded that (at least for constant ejection flow periods) elastance, resistance, series elastance, and deactivation effects are all necessary in describing (and predicting) left ventricular pressure.

11.
The relationship between peak isovolumic developed pressure (Pmax) and end-diastolic volume can indicate ventricular contractility. Therefore, we propose a practical method to estimate Pmax from the pressure curve of an ejecting contraction of the left ventricle. For the estimation, we first considered the left ventricle a linear time-varying hydromotive pressure source (HMP(t)) coupled in series with an internal impedance. To formulate HMP(t) we Fourier-analyzed isovolumic pressure curves obtained under various conditions in six dogs. Since the higher-order harmonics were found to be very small, HMP(t) could be described simply in terms of Pd, the end-diastolic pressure, and ω = 2π/T, where T is the duration of contraction. Finally, HMP(t) for an ejecting contraction was estimated by fitting the equation to the isovolumic portions of the pressure curve of ejecting contractions. The estimated Pmax values correlated well with actually observed Pmax values (r = 0.951, N = 24). We conclude that the proposed technique can be used to estimate Pmax from a single ejecting beat.

12.
Two-dimensional ultrasound sector scans of the left ventricle (LV) are commonly used to diagnose cardiac mechanical function. Present quantification procedures of wall motion by this technique entail inaccuracies, mainly due to relatively poor image quality and the absence of a definition of the relative position of the probe and the heart. The poor quality dictates subjective determination of the myocardial edges, while the absence of a position vector increases the errors in the calculations of wall displacement, LV blood volume, and ejection fraction. An improved procedure is proposed here for automatic myocardial border tracking (AMBT) of the endocardial and epicardial edges in a sequence of video images. The procedure includes nonlinear filtering of whole images, debiasing of gray levels, and location-dependent contrast stretching. The AMBT algorithm is based upon tracking the movement of a small set of predefined points, which are manually marked on the two myocardial borders. Information from one image is used, by applying predetermined statistical criteria, to iteratively search for and detect the border points on the next one. Border contours are reconstructed by spline interpolation of the border points. The AMBT procedure is tested by comparing processed sequences of cine echocardiographic scan images to manual tracings by an objective observer and to results from previously published data.
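Of the pre-processing steps listed, location-dependent contrast stretching is the simplest to sketch: stretch the gray levels of each image tile to the full range independently. The tile size and data are made up, and the paper's exact stretching rule may differ:

```python
import numpy as np

def local_contrast_stretch(img, tile=4):
    """Stretch each tile's gray levels to the full [0, 1] range independently."""
    out = img.astype(float).copy()
    h, w = img.shape
    for i in range(0, h, tile):
        for j in range(0, w, tile):
            block = out[i:i + tile, j:j + tile]
            lo, hi = block.min(), block.max()
            if hi > lo:                       # skip constant tiles
                out[i:i + tile, j:j + tile] = (block - lo) / (hi - lo)
    return out

# Toy low-contrast image (gray levels confined to [0.2, 0.4]).
img = np.linspace(0.2, 0.4, 64).reshape(8, 8)
stretched = local_contrast_stretch(img)
```

Because each tile is normalised on its own, faint edges in dim regions get the same dynamic range as bright regions — useful before edge tracking on low-quality ultrasound frames.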

13.
This paper deals with adaptive sparse approximations of time-series. The work is based on a Bayesian specification of the shift-invariant sparse coding model. To learn approximations for a particular class of signals, two different learning strategies are discussed. The first method uses a gradient optimization technique commonly employed in sparse coding problems. The other method is novel in this context and is based on a sampling estimate. To approximate the gradient in the first approach we compare two Monte Carlo estimation techniques, Gibbs sampling and a novel importance sampling method. The second approach is based on a direct sample estimate and uses an extension of the Gibbs sampler used with the first approach. Both approaches allow the specification of different prior distributions, and we here introduce a novel mixture prior based on a modified Rayleigh distribution. Experiments demonstrate that all Gibbs sampler based methods show comparable performance. The importance sampler was found to work nearly as well as the Gibbs sampler on smaller problems in terms of estimating the model parameters; however, it performed substantially worse at estimating the sparse coefficients. For large problems we found that the combination of a subset selection heuristic with the Gibbs sampling approaches can outperform previously suggested methods. In addition, the methods studied here are flexible and allow the incorporation of additional prior knowledge, such as the nonnegativity of the approximation coefficients, which was found to offer additional benefits where applicable.

14.
A novel adaptive clipping technique for filtering a constant-amplitude frequency modulated (FM) signal embedded in non-Gaussian noise is proposed. It is based on the analysis and processing of an estimate of the probability density function of an FM signal realization. As a result, modifications of two robust estimators of FM signal amplitude are proposed. It is shown that these estimators can be used in Gaussian and non-Gaussian heavy-tail environments. The proposed clipping technique can exploit either obtained robust estimate of the signal amplitude for adaptively setting a threshold. An analysis of signal estimate accuracy for different noise environments is carried out. A comparative analysis of the obtained methods and known approaches based on scanning-window nonlinear filtering and the optimal robust L-DFT form is performed. It is demonstrated that the clipping-based technique leads to considerable improvement of FM signal filtering efficiency in comparison to the aforementioned known approaches for different noise environments and a wide range of input SNR values.
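The idea of tying a clipping threshold to a robust amplitude estimate can be sketched as follows. The median-based estimator, the 1.2 threshold factor, and the signal/noise parameters are all assumptions for illustration, not the paper's estimators:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy constant-amplitude FM signal in heavy-tailed (impulsive) noise.
n = np.arange(2048)
signal = np.cos(2 * np.pi * (0.05 * n + 0.01 * np.sin(2 * np.pi * n / 512)))
noise = 0.1 * rng.standard_t(df=2, size=n.size)   # heavy-tailed Student-t noise
x = signal + noise

# Robust amplitude estimate: for a unit-amplitude sinusoid with uniform phase,
# median(|x|) = cos(pi/4), so rescale the sample median accordingly.
amp = np.median(np.abs(x)) / np.cos(np.pi / 4)

# Adaptive clipping: limit samples to a threshold tied to the robust estimate,
# suppressing impulsive outliers while leaving the FM waveform mostly intact.
thr = 1.2 * amp
clipped = np.clip(x, -thr, thr)
```

The median is insensitive to the heavy tail, so the threshold adapts to the signal amplitude rather than to the outliers, which is the point of the clipping scheme.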

15.
Wang Bin, Shi Chaojian. Acta Electronica Sinica, 2007, 35(8): 1527-1532
Polygonal approximation is an important method for describing curves. Two main problems arise when genetic algorithms are applied to the polygonal approximation of planar digital curves: infeasible solutions are difficult to handle, and the basic genetic algorithm has poor local search ability. To address these two problems, this paper proposes a hybrid genetic algorithm combining split and merge techniques (SMGA). It introduces two classical procedures, the split technique and the merge technique, into the chromosome repair process. With this approach, an infeasible solution is not only repaired quickly but is also pushed toward a locally better position in the solution space. A further advantage is that, unlike existing genetic algorithms, each of which can solve only one class of polygonal approximation problem, SMGA is a general algorithm that can solve both classes. Experimental results show that the proposed algorithm outperforms other algorithms of its kind.
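The classical split technique that SMGA borrows for chromosome repair can be sketched on its own: recursively split a curve segment at its farthest point from the chord until every point lies within a tolerance (the Ramer-Douglas-Peucker scheme). This illustrates only the split building block, not SMGA itself; the curve and tolerance are made up:

```python
def point_line_distance(p, a, b):
    """Perpendicular distance from point p to the line through a and b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    num = abs((bx - ax) * (ay - py) - (ax - px) * (by - ay))
    den = ((bx - ax) ** 2 + (by - ay) ** 2) ** 0.5
    return num / den

def split(points, eps):
    """Recursive split polygonal approximation (Ramer-Douglas-Peucker)."""
    a, b = points[0], points[-1]
    dists = [point_line_distance(p, a, b) for p in points[1:-1]]
    if not dists or max(dists) <= eps:
        return [a, b]                         # chord is a good enough fit
    k = 1 + dists.index(max(dists))           # split at the farthest point
    left = split(points[:k + 1], eps)
    right = split(points[k:], eps)
    return left[:-1] + right                  # drop the duplicated split point

curve = [(0, 0), (1, 0.1), (2, 0), (3, 1), (4, 2), (5, 1), (6, 0)]
approx = split(curve, eps=0.5)
```

In SMGA this split step (together with its merge counterpart) repairs infeasible chromosomes instead of approximating a whole curve from scratch.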

16.
In this paper, we present two new methods for estimating the two-dimensional (2-D) direction-of-arrival (DOA) of narrowband coherent (or highly correlated) signals using an L-shaped array of acoustic vector sensors. We decorrelate the coherency of the signals and reconstruct the signal subspace using a cross-correlation matrix, and then the ESPRIT and propagator methods are applied to estimate the azimuth and elevation angles. The ESPRIT technique is based on the shift invariance property of the array geometry, and the propagator method is based on a partitioning of the cross-correlation matrix. The propagator method is computationally efficient and requires only linear operations; moreover, unlike the ESPRIT method, it does not require any eigendecomposition or singular value decomposition. These two techniques are direct methods which do not require any 2-D iterative search for estimating the azimuth and elevation angles. Simulation results are presented to demonstrate the performance of the proposed methods.

17.
Contour-based algorithms for recovering affine transformation parameters are computationally light but generally cannot handle objects composed of multiple parts, while region-based algorithms are computationally heavy and sensitive to noise. This paper combines the region- and contour-based approaches through central projection. First, central projection converts the target into a closed curve, which guarantees that the affine transformation relationship remains invariant; the resulting closed curve is then parameterized; finally, the parameterized closed curve is used to recover the affine transformation parameters. Experiments show that the proposed method can be used for parameter recovery of objects composed of multiple parts and achieves good recovery accuracy.

18.
Xia Guisong, He Chu, Sun Hong. Journal of Electronics & Information Technology, 2006, 28(12): 2209-2213
Building on a study of traditional parametric statistical models for synthetic aperture radar (SAR) images, and in order to estimate the statistical distribution of high-resolution SAR images accurately, this paper proposes a SAR image segmentation algorithm that combines kernel-based nonparametric estimation with Markov context. The algorithm first estimates the statistical distribution of the SAR image with a kernel-based nonparametric method, then uses this statistic as the likelihood function for segmentation, applying Markov contextual constraints to segment the SAR image. The new algorithm is compared with algorithms based on parametric statistical models through software simulation. The study finds that kernel-based nonparametric estimation depends only on the actual data and often performs better when an analytic form of the distribution cannot be obtained accurately. Experiments show that the kernel-based nonparametric method yields more satisfactory extraction of complex scenes, such as urban areas, in high-resolution SAR images.
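The kernel-based nonparametric estimate at the heart of the method is a Parzen-window density estimate: a kernel is centered on every observed value and the results are averaged. A one-dimensional Gaussian-kernel sketch (bandwidth and synthetic data are assumptions; real SAR amplitude data would replace the normal samples):

```python
import numpy as np

def kde(samples, grid, h):
    """Gaussian-kernel (Parzen) density estimate evaluated on a grid."""
    diffs = (grid[:, None] - samples[None, :]) / h
    return np.exp(-0.5 * diffs ** 2).sum(axis=1) / (samples.size * h * np.sqrt(2 * np.pi))

rng = np.random.default_rng(2)
samples = rng.normal(0.0, 1.0, 2000)       # stand-in for observed pixel values
grid = np.linspace(-4, 4, 81)
density = kde(samples, grid, h=0.3)
```

Because the estimate is built from the data alone, no analytic distribution family has to be assumed, which is exactly the advantage the abstract claims for complex scenes such as urban areas.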

19.
Luo Ting, Li Mengfei, Zhao Yunsong. Acta Electronica Sinica, 2018, 46(11): 2580-2587
In X-ray computed tomography (CT), many applications require accurate knowledge of the X-ray spectrum, including spectral CT image reconstruction, beam-hardening correction of CT images, and quantitative analysis of CT images. However, because the X-ray flux emitted by the tube in a CT system is very high, the spectrum is generally difficult to measure directly; a more common approach is to estimate it indirectly from projection data of phantoms of different thicknesses. Such methods turn spectrum estimation into the solution of an ill-conditioned system of linear equations, and obtaining an accurate X-ray spectrum usually requires measuring many sets of projection data, which is laborious. To address this problem, this paper proposes a new X-ray spectrum estimation method: a quadratic rational fraction is fitted to the polychromatic projection curve, and sampled points on the fitted curve are then used to estimate the X-ray spectrum indirectly. Because the quadratic rational fraction has few parameters, only a small number of measurement points are needed to fit a high-precision polychromatic projection curve. Experiments show that with only 3 measurement points the proposed method reaches the accuracy of spectra estimated by traditional methods using more than ten measurement points, significantly reducing the number of samples and hence the workload of spectrum estimation. Both simulated and measured data confirm the effectiveness of the method.
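Fitting a rational fraction to a projection curve can be linearised by multiplying through by the denominator and solving a least-squares system. The form below (quadratic numerator and denominator, synthetic ground-truth curve, six sample points) is an assumed illustration; the paper's exact parameterisation and the subsequent spectrum inversion are not reproduced:

```python
import numpy as np

def fit_rational(t, p):
    """Least-squares fit of p(t) = (a0 + a1*t + a2*t^2) / (1 + b1*t + b2*t^2),
    linearised as a0 + a1*t + a2*t^2 - b1*t*p - b2*t^2*p = p."""
    A = np.column_stack([np.ones_like(t), t, t ** 2, -t * p, -t ** 2 * p])
    coef, *_ = np.linalg.lstsq(A, p, rcond=None)
    a0, a1, a2, b1, b2 = coef
    return lambda x: (a0 + a1 * x + a2 * x ** 2) / (1 + b1 * x + b2 * x ** 2)

def true_curve(x):
    """Synthetic polychromatic projection curve (made-up ground truth)."""
    return (1.0 + 0.5 * x) / (1 + 0.2 * x + 0.01 * x ** 2)

t = np.array([0.0, 1.0, 2.0, 4.0, 6.0, 8.0])   # phantom thicknesses (arbitrary units)
model = fit_rational(t, true_curve(t))
```

Once the fitted curve is in hand, it can be sampled densely at thicknesses that were never measured, which is how the paper gets by with only a few physical measurement points.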

20.
A Symbol Synchronization and Carrier Frequency Estimation Algorithm for Zero-Padded OFDM Systems
An algorithm is proposed for estimating symbol timing and carrier frequency offset in zero-padded orthogonal frequency division multiplexing (ZP-OFDM) systems. Exploiting the zero-padding property of ZP-OFDM, the algorithm accomplishes symbol timing synchronization by detecting the energy distribution with a double sliding window; in addition, an improved averaging method is given for estimating the fractional frequency offset. Analysis and simulation show that, in both white Gaussian noise and multipath channels, a ZP-OFDM system with a single training symbol can estimate the symbol timing and frequency offset accurately and effectively.
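The double-sliding-window timing idea can be sketched as follows: slide two adjacent energy windows along the received stream; the ratio of leading to trailing window energy dips sharply when the leading window sits in a zero pad. The symbol/pad lengths and noise level are made up, and the paper's frequency-offset averaging step is not shown:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy ZP-OFDM stream: each symbol = N data samples followed by Z zero samples.
N, Z, num_symbols = 64, 16, 4
symbols = [np.concatenate([rng.standard_normal(N) + 1j * rng.standard_normal(N),
                           np.zeros(Z)]) for _ in range(num_symbols)]
total = (N + Z) * num_symbols
x = np.concatenate(symbols) + 0.01 * (rng.standard_normal(total)
                                      + 1j * rng.standard_normal(total))

# Double sliding window: energy of a leading window over a trailing window.
W = Z
e = np.abs(x) ** 2
energy = np.convolve(e, np.ones(W), mode="valid")   # windowed energy at each offset
ratio = energy[W:] / (energy[:-W] + 1e-12)          # leading vs trailing window
start = int(np.argmin(ratio[:N + Z]))               # trailing window ends where the pad begins
```

At the minimum, the trailing window holds the last W data samples of a symbol and the leading window is entirely inside the zero pad, so `start + W` marks the pad boundary and hence the symbol timing.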


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号