Similar Literature
20 similar records found (search time: 0 ms)
1.
The quantitative estimation of regional cardiac deformation from three-dimensional (3-D) image sequences has important clinical implications for the assessment of viability in the heart wall. We present here a generic methodology for estimating soft tissue deformation which integrates image-derived information with biomechanical models, and apply it to the problem of cardiac deformation estimation. The method is image modality independent. The images are segmented interactively and then initial correspondence is established using a shape-tracking approach. A dense motion field is then estimated using a transversely isotropic, linear-elastic model, which accounts for the muscle fiber directions in the left ventricle. The dense motion field is in turn used to calculate the deformation of the heart wall in terms of strain in cardiac-specific directions. The strains obtained using this approach in open-chest dogs before and after coronary occlusion exhibit a high correlation with strains produced in the same animals using implanted markers. Further, they show good agreement with previously published results in the literature. The proposed method provides quantitative regional 3-D estimates of heart deformation.
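The final step of such a pipeline, turning a dense motion field into strain, reduces to a standard tensor computation. A minimal sketch of the small-strain definition only, not the paper's transversely isotropic model; the `grad_u` matrix below is a hypothetical displacement gradient:

```python
def small_strain(grad_u):
    """Infinitesimal strain tensor: eps_ij = (du_i/dx_j + du_j/dx_i) / 2,
    computed from a 3x3 displacement-gradient matrix (nested lists)."""
    return [[0.5 * (grad_u[i][j] + grad_u[j][i]) for j in range(3)]
            for i in range(3)]

# Example: a 1% stretch along x plus a small shear in the x-y plane.
grad_u = [[0.01, 0.02, 0.0],
          [0.00, 0.00, 0.0],
          [0.00, 0.00, 0.0]]
eps = small_strain(grad_u)
```

In practice the gradient would be evaluated per voxel from the estimated motion field, and the tensor then projected onto the cardiac-specific (radial, circumferential, longitudinal) directions.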

2.
It is shown, based on an expression for the received pressure field in pulsed medical ultrasound systems, that a common one-dimensional pulse can be estimated from individual A-lines. An autoregressive moving average (ARMA) model is suggested for the pulse, and an estimator based on the prediction-error method is derived. The estimator is used on a segment of an A-line, assuming that the pulse does not change significantly inside the segment. Several examples are given of applying the estimator to synthetic data, to data measured from a tissue phantom, and to in vitro data measured from a calf liver. They show that a pulse can be estimated even at moderate signal-to-noise ratios.
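The prediction-error idea can be illustrated with a pure-AR special case. A sketch, assuming a noiseless damped sinusoid stands in for the ultrasound pulse (such a signal is exactly AR(2)); this is not the paper's full ARMA estimator:

```python
import math

def fit_ar2(x):
    """Least-squares AR(2) fit: x[n] ~ a1*x[n-1] + a2*x[n-2],
    minimizing the one-step prediction error over the segment."""
    # Normal equations of the 2-parameter least-squares problem.
    s11 = sum(x[n-1] * x[n-1] for n in range(2, len(x)))
    s12 = sum(x[n-1] * x[n-2] for n in range(2, len(x)))
    s22 = sum(x[n-2] * x[n-2] for n in range(2, len(x)))
    b1 = sum(x[n] * x[n-1] for n in range(2, len(x)))
    b2 = sum(x[n] * x[n-2] for n in range(2, len(x)))
    det = s11 * s22 - s12 * s12
    return (b1 * s22 - b2 * s12) / det, (s11 * b2 - s12 * b1) / det

# Synthetic "pulse": exponentially damped sinusoid r^n * sin(w*n),
# whose exact AR(2) coefficients are a1 = 2*r*cos(w), a2 = -r^2.
r, w = 0.95, 0.3
pulse = [r ** n * math.sin(w * n) for n in range(1, 200)]
a1, a2 = fit_ar2(pulse)
```

With noise added, the same normal equations give the prediction-error estimate over the segment; the MA part requires an iterative solver not shown here.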

3.
党宏社, 白梅, 张娜 (Dang Hongshe, Bai Mei, Zhang Na). 《电视技术》 (Video Engineering), 2015, 39(19): 10-13
To classify natural images effectively and accurately, a classification method is proposed that separately weights the low-level image features and the neighbor samples of the KNN classifier. To account for differences in the visual features of different image categories, the ReliefF algorithm is used to compute feature weights for each category in the training set, and these weights refine the distance measure between a test image and the training images. Each neighbor is then weighted according to its distance from the test sample, remedying a weakness of KNN's class-decision rule. Experimental results show that, compared with traditional KNN and feature-weighted KNN, the method improves accuracy and is robust across different values of K.
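The two weighting ideas, per-class feature weights inside the distance and inverse-distance votes for the neighbors, can be sketched as below. The `feat_weights` dict stands in for ReliefF output, which this sketch does not compute, and the tiny dataset is purely illustrative:

```python
import math
from collections import defaultdict

def weighted_knn(query, train, feat_weights, k=3, eps=1e-9):
    """Classify `query` from `train` = [(feature_vector, label), ...].
    Distances use per-class feature weights; votes are inverse-distance."""
    def dist(a, b, label):
        w = feat_weights[label]          # class-specific weights (e.g. from ReliefF)
        return math.sqrt(sum(wi * (ai - bi) ** 2
                             for wi, ai, bi in zip(w, a, b)))
    neighbors = sorted(((dist(query, x, y), y) for x, y in train))[:k]
    votes = defaultdict(float)
    for d, y in neighbors:
        votes[y] += 1.0 / (d + eps)      # closer neighbors vote more strongly
    return max(votes, key=votes.get)

train = [((0.0, 0.0), "sky"), ((0.1, 0.2), "sky"),
         ((5.0, 5.0), "grass"), ((5.2, 5.1), "grass")]
weights = {"sky": [1.0, 1.0], "grass": [1.0, 1.0]}
label = weighted_knn((0.2, 0.1), train, weights, k=3)
```

The inverse-distance vote is what gives the method its robustness to the choice of K: distant neighbors that enter the top-k contribute little to the decision.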

4.
In this paper, we propose and test a new iterative algorithm to simultaneously estimate the nonrigid motion vector fields and the emission images for a complete cardiac cycle in gated cardiac emission tomography. We model the myocardium as an elastic material whose motion does not generate large amounts of strain. As a result, our method is based on minimizing an objective function consisting of the negative logarithm of a maximum likelihood image reconstruction term, the standard biomechanical model of strain energy, and an image matching term that ensures a measure of agreement of intensities between frames. Simulations are obtained using data for the four-dimensional (4-D) NCAT phantom. The data models realistic noise levels in a typical gated myocardial perfusion SPECT study. We show that our simultaneous algorithm produces images with improved spatial resolution characteristics and noise properties compared with those obtained from postsmoothed 4-D maximum likelihood methods. The simulations also demonstrate improved motion estimates over motion estimation using independently reconstructed images.

5.
In this article, developments in and a performance analysis of image matching for detailed surface reconstruction of heritage objects are discussed. Three-dimensional image-based modeling of heritage objects is a very interesting topic with many possible applications. In this article we propose a multistage image-based modeling approach that requires only a limited amount of human interaction and is capable of capturing fine geometric details with accuracy similar to close-range active range sensors. It can also cope with wide baselines using several advancements over standard stereo matching techniques. Our approach is sequential, starting from a sparse basic segmented model created with a small number of interactively measured points. This model, specifically the equation of each surface, is then used as a guide to automatically add the fine details. The following three techniques are used, each where best suited, to retrieve the details: 1) for regularly shaped patches such as planes, cylinders, or quadrics, we apply a fast relative stereo matching technique; 2) for more complex or irregular segments with unknown shape, we use a global multi-image geometrically constrained technique; 3) for segments unsuited for stereo matching, we employ depth from shading (DFS). The goal is not the development of a fully automated procedure for 3D object reconstruction from image data or a sparse stereo approach; rather, we aim at the digital reconstruction of detailed and accurate surfaces from calibrated and oriented images for practical daily documentation and digital conservation of a wide variety of heritage objects.

6.
A multiresolution analysis of digital gray-level images is presented. A gray-level multiscale framework is determined from two main assumptions: the gray scale is binary at the finest spatial resolution, and the gray levels of composed regions are obtained additively. In order to interrelate the gray-level histograms of the same image at different resolutions, probabilistic linear models are developed, which are then applied for estimation. Linear-optimization theory is used as a way of constructing such models. A general procedure for image processing is sketched, based on gray-level estimation. A versatile algorithm for nonlinear filtering is derived. Some examples of prospective applications are given. This work was partially supported by grant TIC91-646 from the DGYCIT of the Spanish Government.

7.
For pt. I see ibid., vol. 26, no. 4, pp. 463-73, July 1988. The variogram function used in geostatistical analysis is a useful statistic in the analysis of remotely sensed images. Using the results derived in Part I, the basic second-order, or covariance, properties of scenes modeled by simple disks of varying size and spacing after imaging into disk-shaped pixels are analyzed to explore the relationship between image variograms and discrete-object scene structure. The models provide insight into the nature of real images of the Earth's surface and the tools for a complete analysis of the more complex case of three-dimensional illuminated discrete-object images.
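For readers unfamiliar with the variogram, the empirical (semi)variogram of a 1-D transect of pixel values is simple to compute. A minimal sketch of the generic statistic only, not Part I's disk-scene model:

```python
def empirical_variogram(z, max_lag):
    """Empirical semivariogram of a 1-D transect:
    gamma(h) = mean of (z[i+h] - z[i])^2 / 2 over all pairs at lag h."""
    gamma = {}
    for h in range(1, max_lag + 1):
        diffs = [(z[i + h] - z[i]) ** 2 for i in range(len(z) - h)]
        gamma[h] = 0.5 * sum(diffs) / len(diffs)
    return gamma
```

For an alternating transect such as `[0, 1, 0, 1, 0, 1]` the variogram is 0.5 at lag 1 and 0 at lag 2, reflecting the period-2 scene structure; the paper's disk models predict the analogous signatures for 2-D scenes.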

8.
The major goal of this paper is to help detect breast cancer early based on infrared images. Some procedures, protocols, and numerical simulations were developed or performed. Two different issues are presented. The first is the development of a standardized protocol for the acquisition of breast thermal images, including the design, construction, and installation of a mechanical apparatus. The second part is related to the greatest difficulty for the numerical computation of breast temperature profiles, which is caused by the uncertainty in the real values of the thermophysical parameters of some tissues. A methodology for estimating thermal properties based on these infrared images is then presented. The commercial software FLUENT™ was used for the numerical simulation. A sequential quadratic programming (SQP) method was used to solve the inverse problem and to estimate the thermal conductivity and blood perfusion of breast tissues. The results showed that it is possible to estimate the thermophysical properties using thermography. The next stage will be to use the geometry of a real breast for the numerical simulation in conjunction with a linear mapping of the temperatures measured over the breast volume.

9.
This paper addresses object tracking in ultrasound images using a robust multiple-model tracker. The proposed tracker has the following features: 1) it uses multiple dynamic models to track the evolution of the object boundary, and 2) it models invalid observations (outliers), reducing their influence on the shape estimates. The problem considered in this paper is the tracking of the left ventricle, which is known to be challenging. The heart motion presents two phases (diastole and systole) with different dynamics; the multiple models used in this tracker address this difficulty. In addition, ultrasound images are corrupted by strong multiplicative noise which prevents the use of standard deformable models; robust estimation techniques are used to cope with this noise. The multiple model data association (MMDA) tracker proposed in this paper is based on a bank of nonlinear filters organized in a tree structure. The algorithm determines which model is active at each instant of time and updates its state by propagating the probability distribution, using robust estimation techniques.

10.
Detecting targets occluded by foliage in foliage-penetrating (FOPEN) ultra-wideband synthetic aperture radar (UWB SAR) images is an important and challenging problem. Given the different nature of target returns in foliage and nonfoliage regions and the very low signal-to-clutter ratio in UWB imagery, conventional detection algorithms fail to yield robust target detection results. A new target detection algorithm is proposed that (1) incorporates symmetric alpha-stable (SαS) distributions for accurate clutter modeling, (2) constructs a two-dimensional (2-D) site model for deriving local context, and (3) exploits the site model for region-adaptive target detection. Theoretical and empirical evidence is given to support the use of the SαS model for image segmentation and constant false alarm rate (CFAR) detection. Results of our algorithm on real FOPEN images collected by the Army Research Laboratory are provided.
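Symmetric alpha-stable variates, used here for clutter modeling, can be drawn with the Chambers-Mallows-Stuck transform. A sketch for the symmetric (beta = 0), unit-scale case only; the paper's parameter estimation and detector are not shown:

```python
import math, random

def sas_sample(alpha, rng):
    """Chambers-Mallows-Stuck draw from a symmetric alpha-stable law
    (beta = 0, unit scale). Heavy tails appear for alpha < 2."""
    u = rng.uniform(-math.pi / 2, math.pi / 2)
    w = rng.expovariate(1.0)
    if alpha == 1.0:
        return math.tan(u)               # Cauchy special case
    return (math.sin(alpha * u) / math.cos(u) ** (1.0 / alpha)
            * (math.cos(u - alpha * u) / w) ** ((1.0 - alpha) / alpha))

# Sanity check: alpha = 2 reduces to a Gaussian with variance 2.
rng = random.Random(0)
samples = [sas_sample(2.0, rng) for _ in range(20000)]
second_moment = sum(x * x for x in samples) / len(samples)
```

For alpha < 2 the second moment diverges, which is exactly why SαS models capture the impulsive clutter that Gaussian CFAR detectors mishandle.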

11.
Presents a new algorithm for the robust and accurate tracking of the aorta in cardiovascular magnetic resonance (MR) images. First, a rough estimate of the location and diameter of the aorta is obtained by applying a multiscale medial-response function using the available a priori knowledge. Then, this estimate is refined using an energy-minimizing deformable model which the authors define in a Markov-random-field (MRF) framework. In this context, the authors propose a global minimization technique based on stochastic relaxation, simulated annealing (SA), which is shown to be superior to other minimization techniques for minimizing the energy of the deformable model. The authors have evaluated the performance and robustness of the algorithm on clinical compliance studies in cardiovascular MR images. The segmentation and tracking have been successfully tested in spin-echo MR images of the aorta. The results show the ability of the algorithm to produce not only accurate but also very reliable results in routine clinical applications.
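Simulated annealing itself is easy to sketch. The toy quadratic below is only a stand-in for the deformable-model (MRF) energy, which this sketch does not implement:

```python
import math, random

def simulated_annealing(energy, x0, step=1.0, t0=1.0, cooling=0.995,
                        iters=5000, seed=0):
    """Minimize `energy` over a 1-D state: propose a random move, always
    accept improvements, accept uphill moves with probability exp(-dE/T),
    and cool the temperature T geometrically."""
    rng = random.Random(seed)
    x, e, t = x0, energy(x0), t0
    for _ in range(iters):
        cand = x + rng.uniform(-step, step)
        de = energy(cand) - e
        if de < 0 or rng.random() < math.exp(-de / t):
            x, e = cand, e + de
        t *= cooling
    return x, e

toy_energy = lambda x: (x - 3.0) ** 2    # stand-in for the snake/MRF energy
x_min, e_min = simulated_annealing(toy_energy, x0=10.0)
```

The uphill-acceptance rule is what distinguishes SA from greedy descent and lets it escape the local minima that trap simpler minimizers of deformable-model energies.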

12.
The purpose of this work is to develop patient-specific models for automatically detecting lung nodules in computed tomography (CT) images. It is motivated by significant developments in CT scanner technology and the burden that lung cancer screening and surveillance imposes on radiologists. We propose a new method that uses a patient's baseline image data to assist in the segmentation of subsequent images so that changes in size and/or shape of nodules can be measured automatically. The system uses a generic, a priori model to detect candidate nodules on the baseline scan of a previously unseen patient. A user then confirms or rejects nodule candidates to establish baseline results. For analysis of follow-up scans of that particular patient, a patient-specific model is derived from these baseline results. This model describes expected features (location, volume and shape) of previously segmented nodules so that the system can relocalize them automatically on follow-up. On the baseline scans of 17 subjects, a radiologist identified a total of 36 nodules, of which 31 (86%) were detected automatically by the system with an average of 11 false positives (FPs) per case. In follow-up scans 27 of the 31 nodules were still present and, using patient-specific models, 22 (81%) were correctly relocalized by the system. The system automatically detected 16 out of a possible 20 (80%) of new nodules on follow-up scans with ten FPs per case.

13.
14.
Automated analysis of nerve-cell images using active contour models
The number of nerve fibers (axons) in a nerve, together with axon size and shape, can be important neuroanatomical features for understanding different aspects of nerves in the brain. However, the number of axons in a nerve is typically on the order of tens of thousands, and a study of a particular aspect of the nerve often involves many nerves. Potentially meaningful studies are often precluded by the huge numbers involved when manual measurements must be employed. A method that automates the analysis of axons from electron-micrographic images is presented. It begins with a rough identification of all axon centers by use of an elliptical Hough transform procedure. The boundary of each axon is then extracted using an active contour model, or snakes, approach in which physical properties of the axons and the given image data are used in an optimization scheme to guide the snakes to converge to axon boundaries for accurate sheath measurement. However, false axon detections remain common due to poor image quality and the presence of other, irrelevant cell features, so a conflict-resolution scheme is developed to eliminate false axons and further improve detection performance. The developed method has been tested on a number of nerve images, and its results are presented.

15.
Point spread function estimation for long-exposure images degraded by atmospheric turbulence
Atmospheric turbulence markedly degrades the imaging quality of optical systems: the farther the target and the longer the exposure, the stronger the atmospheric disturbance and the blurrier the image. A blurred image can be restored using the atmospheric-turbulence point spread function (PSF), but under real conditions an accurate PSF is often hard to obtain. Against this research background, an approximately isosceles-triangle model is proposed for restoring long-exposure turbulence-degraded images; the model yields an accurate atmospheric-turbulence PSF, and Wiener filtering is then applied to obtain a sharp restored image. Experiments show that the method estimates an accurate PSF for long-exposure turbulence-degraded natural images acquired over wide fields of view at long range. The restored images have good visual quality, and two objective metrics, the gray mean gradient and the Laplacian gradient magnitude, further confirm the algorithm's effectiveness.
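The Wiener-filter restoration step is generic and can be sketched in 1-D. The triangular `psf` below is only a stand-in for the paper's isosceles-triangle turbulence PSF model, and the naive DFT keeps the sketch dependency-free:

```python
import cmath

def dft(x, inverse=False):
    """Naive O(N^2) discrete Fourier transform (fine for tiny demos)."""
    n, s = len(x), (1 if inverse else -1)
    out = [sum(x[m] * cmath.exp(s * 2j * cmath.pi * k * m / n)
               for m in range(n)) for k in range(n)]
    return [v / n for v in out] if inverse else out

def wiener_deconvolve(blurred, psf, nsr=1e-3):
    """Frequency-domain Wiener filter: X = conj(H) * Y / (|H|^2 + NSR)."""
    Y, H = dft(blurred), dft(psf)
    X = [h.conjugate() * y / (abs(h) ** 2 + nsr) for y, h in zip(Y, H)]
    return [v.real for v in dft(X, inverse=True)]

N = 16
psf = [0.0] * N
psf[0], psf[1], psf[-1] = 0.5, 0.25, 0.25            # symmetric triangular blur
blurred = [0.0] * N                                  # unit impulse at pixel 3,
blurred[2], blurred[3], blurred[4] = 0.25, 0.5, 0.25  # circularly blurred
restored = wiener_deconvolve(blurred, psf)
```

The noise-to-signal ratio `nsr` regularizes frequencies where the PSF response is near zero, which is what keeps the restoration stable when the estimated PSF is only approximate.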

16.
Tooth segmentation of dental study models using range images
The accurate segmentation of the teeth from the digitized representation of a dental study model is an important component in computer-based algorithms for orthodontic feature detection and measurement and in the simulation of orthodontic procedures such as tooth rearrangement. This paper presents an automated method for tooth segmentation from the three-dimensional (3-D) digitized image captured by a laser scanner. We avoid the complexity of directly processing 3-D mesh data by proposing the innovative idea of detecting features on two range images computed from the 3-D image. The dental arch is first obtained from the plan-view range image. Using the arch as the reference, a panoramic range image of the dental model can be computed. The interstices between the teeth are detected separately in the two range images, and results from both views are combined for a determination of interstice locations and orientations. Finally, the teeth are separated from the gums by delineating the gum margin. The algorithm was tested on 34 dental models representing a variety of malocclusions and was found to be robust and accurate.

17.
Monte Carlo techniques for estimating various network reliability characteristics, including terminal connectivity, are developed by assuming that edges are subject to failures with arbitrary probabilities and nodes are absolutely reliable. The core of the approach is introducing network time-evolution processes and using certain graph-theoretic machinery, resulting in a considerable increase in accuracy for Monte Carlo estimates, especially for highly reliable networks. Simulation strategies and numerical results are presented and discussed.
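The crude Monte Carlo baseline that such variance-reduction schemes improve on is straightforward: sample edge failures, then test s-t connectivity. A sketch of that baseline only; the paper's time-evolution machinery is not shown:

```python
import random

def mc_terminal_connectivity(n, edges, s, t, p_fail, trials=20000, seed=1):
    """Crude Monte Carlo estimate of P(s and t remain connected) when each
    edge fails independently with its own probability; nodes never fail."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        adj = [[] for _ in range(n)]
        for (u, v), pf in zip(edges, p_fail):
            if rng.random() >= pf:           # edge survives this trial
                adj[u].append(v)
                adj[v].append(u)
        seen, stack = {s}, [s]               # depth-first search from s
        while stack:
            u = stack.pop()
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    stack.append(v)
        hits += t in seen
    return hits / trials

# Two parallel edges between nodes 0 and 1, each failing with prob 0.5:
# the true terminal reliability is 1 - 0.5^2 = 0.75.
est = mc_terminal_connectivity(2, [(0, 1), (0, 1)], s=0, t=1,
                               p_fail=[0.5, 0.5])
```

For highly reliable networks nearly every trial hits, so the relative error of this crude estimator blows up, which is precisely the regime the paper's graph-evolution approach targets.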

18.
A Bayesian formulation is proposed for reliable and robust extraction of the directional field in fingerprint images using a class of spatially smooth priors. The spatial smoothness allows for robust directional field estimation in the presence of moderate noise levels. Parametric template models are suggested as candidate singularity models for singularity detection. The parametric models enable joint extraction of the directional field and the singularities in fingerprint impressions by dynamic updating of feature information. This allows for the detection of singularities that may have previously been missed, as well as better alignment of the directional field around detected singularities. A criterion is presented for selecting an optimal block size to reduce the number of spurious singularity detections. The best rates of spurious detection and missed singularities given by the algorithm are 4.9% and 7.1%, respectively, based on the NIST 4 database.

19.
Operational rate-distortion (RD) functions of most natural images, when compressed with state-of-the-art wavelet coders, exhibit a power-law behavior D ∝ R^(−γ) at moderately high rates, with γ being a constant depending on the input image, deviating from the well-known exponential form of the RD function D ∝ 2^(−ξR) for bandlimited stationary processes. This paper explains this intriguing observation by investigating the theoretical and operational RD behavior of natural images. We take as our source model the fractional Brownian motion (fBm), which is often used to model nonstationary behaviors in natural images. We first establish that the theoretical RD function of the fBm process (both in 1-D and 2-D) indeed follows a power law. Then we derive the operational RD function of the fBm process when wavelet encoded based on the water-filling principle. Interestingly, both the operational and theoretical RD functions behave as D ∝ R^(−γ). For natural images, the values of γ are found to be distributed around 1. These results lend information-theoretical support to the merit of multiresolution wavelet compression of self-similar processes and, in particular, natural images that can be modelled by such processes. They may also prove useful in predicting the performance of RD-optimized image coders.
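Empirically, the exponent γ can be read off as the negative slope of a log-log fit of distortion against rate. A sketch on synthetic power-law data (the rate-distortion pairs below are illustrative, not measurements from any coder):

```python
import math

def fit_power_law_exponent(rates, dists):
    """Least-squares slope of log D versus log R; for D = c * R^(-gamma)
    the fitted slope is exactly -gamma."""
    xs = [math.log(r) for r in rates]
    ys = [math.log(d) for d in dists]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return -slope

rates = [0.5, 1.0, 2.0, 4.0]                  # bits per pixel (synthetic)
dists = [2.0 * r ** -1.1 for r in rates]      # synthetic D proportional to R^-1.1
gamma_hat = fit_power_law_exponent(rates, dists)
```

On real operational RD curves the fit should be restricted to the moderately high-rate regime where the paper observes the power law.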

20.
This paper presents a fully Bayesian approach to analyzing finite generalized Gaussian mixture models, which incorporate several standard distributions widely used in signal and image processing applications, such as the Laplacian and the Gaussian. Our work is motivated by the fact that the generalized Gaussian distribution (GGD) can be applied to a wide range of data due to its shape flexibility, which justifies its usefulness for modeling the statistical behavior of multimedia signals [1]. We present a method to evaluate the posterior distribution and Bayes estimators using a Gibbs sampling algorithm. For the selection of the number of components in the mixture, we use the integrated likelihood and the Bayesian information criterion. We validate the proposed method by applying it to synthetic data, real datasets, texture classification and retrieval, and image segmentation, while comparing it to several other approaches.
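The generalized Gaussian density at the heart of such mixtures has, with scale α and shape β, the form f(x) = β / (2 α Γ(1/β)) · exp(−(|x|/α)^β); β = 2 recovers the Gaussian and β = 1 the Laplacian. A sketch of the density alone (the Gibbs sampler itself is not shown):

```python
import math

def ggd_pdf(x, alpha, beta):
    """Generalized Gaussian density with scale alpha > 0 and shape beta > 0."""
    norm = beta / (2.0 * alpha * math.gamma(1.0 / beta))
    return norm * math.exp(-(abs(x) / alpha) ** beta)

# With beta = 2 and alpha = sqrt(2), the GGD is the standard normal density,
# so its value at x = 0 is 1 / sqrt(2*pi).
p0 = ggd_pdf(0.0, math.sqrt(2.0), 2.0)
```

Varying β between the Laplacian and Gaussian endpoints (and beyond) is what gives the mixture its flexibility in fitting multimedia-signal statistics.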


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号