Similar Literature
20 similar documents retrieved.
1.
Examination of the tear-film lipid layer is often helpful in the prognosis of prospective contact lens patients and contact-lens-related problems, and in the analysis of symptomatic non-contact-lens-wearing patients. In particular, the thickness of the lipid layer is considered an informative cue for studying tear-film stability and uncovering certain disorders. We propose a method for the accurate estimation of the lipid-layer thickness that exploits the intensity and color information in Fizeau fringe images. The technique is based on a quantitative measure for discriminating among the spectra associated with different thicknesses. We propose an optical system for imaging the interference patterns, develop a mathematical model based on the physics of fringe formation and sensing, and describe the calibration of the optical system using this model. The thickness extraction is readily carried out using a lookup table. The proposed method would enable objective evaluation of lipid-layer characteristics and provide a means for examining the dynamic changes in its thickness and spatial distribution during inter-blink periods.
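As a rough illustration of the final lookup-table step only, the Python sketch below assigns each pixel the thickness whose calibrated fringe colour is nearest in RGB space. The LUT arrays (`lut_rgb`, `lut_thickness`) and the squared-colour-distance criterion are assumptions of this sketch; the paper's calibrated imaging model and its specific discriminating measure are not reproduced here.

```python
import numpy as np

def estimate_thickness_map(rgb_image, lut_rgb, lut_thickness):
    """Assign each pixel the thickness whose calibrated fringe colour is closest.

    rgb_image     : (H, W, 3) float array of observed fringe colours
    lut_rgb       : (N, 3) colours predicted by a calibrated fringe model (assumed given)
    lut_thickness : (N,) lipid-layer thicknesses (e.g. in nm) for each LUT entry
    """
    pixels = rgb_image.reshape(-1, 3)
    # Squared colour distance between every pixel and every LUT entry
    # (for large images this pairwise computation should be chunked).
    d2 = ((pixels[:, None, :] - lut_rgb[None, :, :]) ** 2).sum(axis=2)
    best = d2.argmin(axis=1)
    return lut_thickness[best].reshape(rgb_image.shape[:2])
```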

2.
This paper addresses the problem of correlation estimation in sets of compressed images. We consider a framework where the images are represented under the form of linear measurements due to low complexity sensing or security requirements. We assume that the images are correlated through the displacement of visual objects due to motion or viewpoint change and the correlation is effectively represented by optical flow or motion field models. The correlation is estimated in the compressed domain by jointly processing the linear measurements. We first show that the correlated images can be efficiently related using a linear operator. Using this linear relationship we then describe the dependencies between images in the compressed domain. We further cast a regularized optimization problem where the correlation is estimated in order to satisfy both data consistency and motion smoothness objectives with a Graph Cut algorithm. We analyze in detail the correlation estimation performance and quantify the penalty due to image compression. Extensive experiments in stereo and video imaging applications show that our novel solution stays competitive with methods that implement complex image reconstruction steps prior to correlation estimation. We finally use the estimated correlation in a novel joint image reconstruction scheme that is based on an optimization problem with sparsity priors on the reconstructed images. Additional experiments show that our correlation estimation algorithm leads to an effective reconstruction of pairs of images in distributed image coding schemes that outperform independent reconstruction algorithms by 2–4 dB.

3.
Sidiropoulos et al. (1994) demonstrated that morphological openings and closings can be viewed as maximum a posteriori (MAP) estimators of morphologically smooth signals in signal-independent i.i.d. noise. The present authors extend these results to the M-fold independent observation case and show that the aforementioned estimators are strongly consistent. We also demonstrate the validity of a thresholding conjecture (Sidiropoulos et al., 1994) by simulation, and use it to evaluate estimator performance. Taken together, these results can help determine the least upper bound on M that guarantees virtually error-free reconstruction of morphologically smooth images.
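For intuition only, here is a hedged sketch of what an M-fold estimator of a morphologically smooth binary image might look like: a pixel-wise majority vote over the M noisy observations followed by an opening and a closing. Whether this voting-then-filtering combination corresponds exactly to the paper's thresholding conjecture is an assumption of the sketch, and the 3x3 structuring element is a placeholder.

```python
import numpy as np
from scipy import ndimage

def fuse_and_smooth(observations, struct=None):
    """Majority vote over M binary observations, then morphological smoothing.

    observations : array-like of shape (M, H, W) with values in {0, 1}.
    """
    if struct is None:
        struct = np.ones((3, 3), dtype=bool)      # placeholder structuring element
    obs = np.asarray(observations, dtype=float)
    voted = obs.mean(axis=0) >= 0.5               # pixel-wise majority vote
    opened = ndimage.binary_opening(voted, structure=struct)
    return ndimage.binary_closing(opened, structure=struct)
```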

4.
The problem of determining font metrics from measurements on images of typeset text is discussed, and least-squares procedures for font metric estimation are developed. When kerning is absent, sidebearing estimation involves solving a set of linear equations, called the sidebearing normal equations. More generally, simultaneous sidebearing and kerning-term estimation involves an iterative procedure in which a modified set of sidebearing normal equations is solved during each iteration. Character depth estimates are obtained by solving a set of baseline normal equations. In a preliminary evaluation of the proposed procedures on scanned text images in three fonts, the root-mean-square set width estimation error was about 0.2 pixel. An application of font metric estimation to text image editing is discussed.
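A minimal sketch of the no-kerning case, assuming the measured inter-character gap is modelled as the right sidebearing of the left character plus the left sidebearing of the right character; the variable names and this exact measurement model are illustrative, not the paper's formulation.

```python
import numpy as np

def estimate_sidebearings(pairs, gaps, n_chars):
    """Least-squares sidebearing estimates from measured inter-character gaps.

    pairs   : list of (left_char_id, right_char_id) adjacencies seen in the image
    gaps    : measured gap (pixels) for each adjacency
    Model   : gap ~ rsb[left] + lsb[right]   (no kerning terms)
    Returns : (rsb, lsb) arrays of length n_chars.
    """
    A = np.zeros((len(pairs), 2 * n_chars))
    for row, (left, right) in enumerate(pairs):
        A[row, left] = 1.0                 # right sidebearing of the left glyph
        A[row, n_chars + right] = 1.0      # left sidebearing of the right glyph
    # The system is only determined up to a constant shift between rsb and lsb;
    # lstsq returns the minimum-norm least-squares solution.
    x, *_ = np.linalg.lstsq(A, np.asarray(gaps, dtype=float), rcond=None)
    return x[:n_chars], x[n_chars:]
```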

5.
The authors present a new illuminant-tilt-angle estimator that works with isotropic textures. It has been developed from Kube and Pentland's (1988) frequency-domain model of images of three-dimensional texture, and is compared with Knill's (1990) spatial-domain estimator. The frequency- and spatial-domain theory behind each of the estimators is related via an alternative proof of the basic phenomenon that both estimators exploit: that the variance of the partial derivative of the image is at a maximum when the partial derivative is taken in the direction of the illuminant's tilt. Results obtained using both simulated and real textures suggest that the frequency-domain estimator is more accurate.
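The phenomenon both estimators exploit can be demonstrated directly by brute force: compute the image gradient, evaluate the variance of the directional derivative over a grid of directions, and take the direction of maximum variance as the tilt estimate (up to the usual 180° ambiguity). The sketch below illustrates the phenomenon only; it is neither the paper's frequency-domain estimator nor Knill's spatial-domain estimator.

```python
import numpy as np

def estimate_tilt_angle(image, n_angles=180):
    """Brute-force tilt estimate: the variance of the directional derivative is
    maximal when the derivative is taken along the illuminant tilt direction."""
    gy, gx = np.gradient(np.asarray(image, dtype=float))
    angles = np.linspace(0.0, np.pi, n_angles, endpoint=False)
    variances = [np.var(np.cos(t) * gx + np.sin(t) * gy) for t in angles]
    return angles[int(np.argmax(variances))]   # radians, modulo pi
```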

6.
Motion estimation from tagged MR image sequences
A method for reconstructing motion from sequences of tagged magnetic resonance (MR) images is presented. MR tagging is used to create a spatial pattern of varying magnetization so that objects which may otherwise have constant intensity are textured, which reduces the motion ambiguity associated with the aperture problem in computer vision. To compensate for the decay of the tag pattern, a new optical flow algorithm is developed and implemented. The resulting estimated velocity field is then used to recursively update the implied motion reference map over time, thereby tracking the motion of individual particles. If a segmentation of the object is known at the time the tag pattern is created, then an object may be selectively tracked, using the estimated reference map to update the object's position as time progresses. Results are shown for both simulated and actual MR phantom data.

7.
Segmentation of echocardiographic images using mathematical morphology
A semiautomatic technique for isolating the ventricular endocardial border in echocardiograms from a commercially available two-dimensional phased array ultrasound system is presented. This method processes echo images using mathematical morphology to reduce the effects of range and azimuth variation inherent in echo. After morphological filtering, the endocardial border is extracted with traditional segmentation methods. Further processing of the resulting border using binary morphology produces a region of interest suitable for derivation of motion parameters of the endocardium. The area and the shape of semiautomatically-derived regions correlate well (r>0.93) with those defined by expert observers in a study of induced ischemia in seven canines.
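A hedged sketch of the morphology-then-threshold idea: a grey-level opening and closing suppress speckle before a crude threshold and largest-component selection produce a candidate region. The structuring-element size and the mean-intensity threshold are placeholders, not the paper's settings.

```python
import numpy as np
from scipy import ndimage

def extract_border_region(echo_image, struct_size=5, threshold=None):
    """Morphologically smooth an echo image, then threshold and keep the
    largest connected component as a candidate region of interest."""
    img = np.asarray(echo_image, dtype=float)
    smoothed = ndimage.grey_closing(
        ndimage.grey_opening(img, size=struct_size), size=struct_size)
    if threshold is None:
        threshold = smoothed.mean()            # crude default for this sketch
    mask = smoothed > threshold
    labels, n = ndimage.label(mask)
    if n == 0:
        return mask
    sizes = ndimage.sum(mask, labels, index=range(1, n + 1))
    return labels == (1 + int(np.argmax(sizes)))
```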

8.
9.
A computational method is reported which allows the fully automated recovery of the three-dimensional shape of the cardiac left ventricle from a reduced set of apical echo views. Two typically ill-posed problems have been faced: 1) the detection of the left ventricle contours in each view, and 2) the integration of the detected contour points (which form a sparse and partially inconsistent data set) into a single surface representation. The authors' solution to these problems is based on a careful integration of standard computer vision algorithms with neural networks. Boundary detection comprises three steps: edge detection, edge grouping, and edge classification. The first and second steps (which are typical early-vision tasks not involving specific domain-knowledge) have been performed through fast, well-established algorithms of computer vision. The higher level task of left ventricle-edge discrimination, which involves the exploitation of specific knowledge about the left ventricle silhouette, has been performed by feedforward neural networks. Following the most recent results in the field of computer vision, the first step in solving the problem of recovering the ventricle surface has been the adoption of a physically inspired model of it. Basically, the authors have modeled the left ventricle surface as a closed, thin, elastic surface and the data as a set of radial springs acting on it. The recovery process is equivalent to the settling of the surface-plus-springs system into a stable configuration of minimum potential energy. The finite element discretization of this model leads directly to an analog neural-network implementation. The efficiency of such an implementation has been remarkably enhanced through a learning algorithm which embeds specific knowledge about the shape of the left ventricle in the network. Experiments using clinical echographic sequences are described. Four apical views (each with a different rotation of the probe) have been acquired during a heartbeat from a set of seven normal subjects. These images have been utilized to set the various processing modules and test their capabilities.

10.
Two-dimensional ultrasound sector scans of the left ventricle (LV) are commonly used to diagnose cardiac mechanical function. Present quantification procedures of wall motion by this technique entail inaccuracies, mainly due to relatively poor image quality and the absence of a definition of the relative position of the probe and the heart. The poor quality dictates subjective determination of the myocardial edges, while the absence of a position vector increases the errors in the calculations of wall displacement, LV blood volume, and ejection fraction. An improved procedure is proposed here for automatic myocardial border tracking (AMBT) of the endocardial and epicardial edges in a sequence of video images. The procedure includes nonlinear filtering of whole images, debiasing of gray levels, and location-dependent contrast stretching. The AMBT algorithm is based upon tracking the movement of a small number of predefined points, which are manually marked on the two myocardial borders. Information from one image is used, via predetermined statistical criteria, to iteratively search for and detect the border points in the next one. Border contours are reconstructed by spline interpolation of the border points. The AMBT procedure is tested by comparing processed sequences of cine echocardiographic scan images to manual tracings by an objective observer and to previously published results.
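As a sketch of the final contour-reconstruction step only, the snippet below fits a closed (periodic) spline through the tracked border points; the tracking, filtering, and contrast-stretching stages are not reproduced, and the sampling density is a placeholder.

```python
import numpy as np
from scipy import interpolate

def spline_border(points, n_samples=200):
    """Closed-contour reconstruction by periodic spline interpolation through
    the tracked border points (points: (K, 2) array of (x, y) coordinates)."""
    pts = np.asarray(points, dtype=float)
    pts = np.vstack([pts, pts[:1]])                       # close the loop
    tck, _ = interpolate.splprep([pts[:, 0], pts[:, 1]], s=0, per=True)
    u = np.linspace(0.0, 1.0, n_samples)
    x, y = interpolate.splev(u, tck)
    return np.column_stack([x, y])
```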

11.
Age estimation from face images has important theoretical significance and potential application value. The proposed face-image age estimation system comprises four functional modules: image preprocessing, feature extraction, learning-model construction, and model prediction. The system first applies preprocessing techniques such as grayscale transformation, geometric normalization, and histogram equalization to obtain the required input images. It then uses the FG-NET dataset, extracts features with a convolutional neural network, and builds the learning model with a support vector machine. Finally, new samples are fed into the trained model for age-group classification and age estimation. Simulation experiments show an accuracy of 88.14% and a total age error of 5.3, demonstrating high recognition accuracy and strong applicability.
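A minimal sketch of the learning stage, assuming CNN feature vectors have already been extracted for each preprocessed face image; the kernel and regularisation settings are placeholders, and the feature-extraction network itself is not reproduced.

```python
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC, SVR

def train_age_models(cnn_features, age_groups, ages):
    """Fit an SVM age-group classifier and an SVR age regressor on
    precomputed CNN feature vectors (shape: n_samples x n_features)."""
    group_clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0))
    group_clf.fit(cnn_features, age_groups)
    age_reg = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0))
    age_reg.fit(cnn_features, ages)
    return group_clf, age_reg
```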

12.
Real-time 3-D echocardiography opens up the possibility of interactive, fast 3-D analysis of cardiac anatomy and function. However, at the present time its quantitative power cannot be fully exploited due to the limited quality of the images. In this paper, we present an algorithm to register apical and parasternal echocardiographic datasets that uses a new similarity measure, based on local orientation and phase differences. By using phase and orientation to guide registration, the effect of artifacts intrinsic to ultrasound images is minimized. The presented method is fully automatic except for initialization. The accuracy of the method was validated qualitatively, resulting in 85% of the cardiac segments estimated having a registration error smaller than 2 mm, and no segments with an error larger than 5 mm. Robustness with respect to landmark initialization was validated quantitatively, with average errors smaller than 0.2 mm and 0.5 degrees for initialization landmark rotations of up to 15 degrees and translations of up to 10 mm.

13.
A zero-hit run-length probability model for image statistics is derived. The statistics are based on the lengths of runs of pixels that do not include any part of objects that define a scene model. The statistics are used to estimate the density and size of the discrete objects (modeled as disks) from images when the image pixel size is significant relative to the object size. Using different combinations of disk size, density, and image resolution (pixel size) in simulated images, parameter estimation may be used to investigate the essential invertibility of object size and density. Analysis of the relative errors and 95% confidence intervals indicates the accuracy and reliability of the estimates. An integrated parameter, r, reveals relationships between errors and the combinations of the three basic parameters of object size, density, and pixel size. The method may be used to analyze real remotely sensed images if simplifying assumptions are relaxed to include the greater complexity found in real data.

14.
Multiview image sequence processing has been the focus of considerable attention in recent literature. This paper presents an efficient technique for object-based rigid and non-rigid 3D motion estimation, applicable to problems occurring in multiview image sequence coding applications. More specifically, a neural network is formed for the estimation of the rigid 3D motion of each object in the scene, using initially estimated 2D motion vectors corresponding to each camera view. Non-linear error minimization techniques are adopted for neural network weight update. Furthermore, a novel technique is also proposed for the estimation of the local non-rigid deformations, based on the multiview camera geometry. Experimental results using both stereoscopic and trinocular camera setups illustrate and evaluate the proposed scheme.

15.
An automatic method for identifying the location of the papillary muscles in two-dimensional (2-D) short-axis echocardiographic images is described. The technique uses both spatial and temporal information to identify the presence and track the location of the muscles in the left ventricle from end-diastole to end-systole. The three main steps of the method are spatial preprocessing, spatial processing, and temporal processing. The spatial preprocessing step includes a region of search estimation. The spatial processing step includes a papillary muscle existence test and an initial approximation of the papillary muscle points. The temporal processing includes motion-pattern evaluation and final papillary muscle location. The estimates of existence and position for the automatic method were compared with estimates made by an independent expert observer. Two hundred and ten frames, three taken from each of 70 image sequences, were evaluated. Since two regions of search were processed for each frame (one for the posterior-inferior and one for the anterior-lateral papillary muscle), a total of 420 approximations were made. Of this total, 340 automatic estimates were judged to be in close agreement with estimates made by the expert. Of the remaining 80 approximations, 54 estimates were made by the expert when the computer determined that no papillary muscle was present, 17 estimates provided poor results, and nine estimates were made by the computer when the observer concluded that no papillary muscle was present.

16.
Estimating the covariance sequence of a wide-sense stationary process is of fundamental importance in digital signal processing (DSP). A new method, which makes use of Fourier inversion of the Capon spectral estimates and is referred to as the Capon method, is presented in this paper. It is shown that the Capon power spectral density (PSD) estimator yields an equivalent autoregressive (AR) or autoregressive moving-average (ARMA) process; hence, the exact covariance sequence corresponding to the Capon spectrum can be computed in a rather convenient way. Also, without much accuracy loss, the computation can be significantly reduced via an approximate Capon method that utilizes the fast Fourier transform (FFT). Using a variety of ARMA signals, we show that Capon covariance estimates are generally better than standard sample covariance estimates and can be used to improve performance in DSP applications that are critically dependent on the accuracy of the covariance sequence estimates. This work was supported in part by National Science Foundation Grant MIP-9308302, Advanced Research Projects Agency Grant MDA-972-93-1-0015, the Senior Individual Grant Program of the Swedish Foundation for Strategic Research, and the Swedish Research Council for Engineering Sciences (TFR).
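A hedged sketch of the FFT-based approximate route: form a sample covariance matrix from m-length data snapshots, evaluate the Capon spectrum on a dense frequency grid, and Fourier-invert it back to covariance lags. The snapshot averaging scheme, the filter length m, and the PSD scaling (conventions vary between m and m+1) are assumptions of this sketch, not the paper's exact formulation.

```python
import numpy as np

def capon_covariance(x, m=16, n_lags=10, n_freq=512):
    """Approximate Capon covariance estimate via FFT inversion of the Capon PSD."""
    x = np.asarray(x)
    n = len(x)
    # Sample covariance matrix from forward-only m-length snapshots.
    snaps = np.array([x[i:i + m] for i in range(n - m + 1)])
    R = snaps.conj().T @ snaps / snaps.shape[0]
    Rinv = np.linalg.inv(R)
    # Capon PSD on an FFT grid: P(w) = m / (a(w)^H R^{-1} a(w)),
    # with steering vector a(w) = [1, e^{jw}, ..., e^{j(m-1)w}]^T.
    freqs = 2.0 * np.pi * np.arange(n_freq) / n_freq
    A = np.exp(1j * np.outer(freqs, np.arange(m)))
    denom = np.einsum('fi,ij,fj->f', A.conj(), Rinv, A).real
    psd = m / denom
    # Inverse DFT of the PSD gives the covariance-lag estimates
    # (imaginary parts are negligible for real data).
    return np.real_if_close(np.fft.ifft(psd)[:n_lags])
```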

17.
席林  孙韶媛  李琳娜  邹芳喻 《激光与红外》2012,42(11):1311-1315
An algorithm is proposed for estimating depth from a single monocular infrared image using a nonlinear learning model. The algorithm first uses stepwise linear regression and independent component analysis (ICA) to find features that are strongly correlated with depth in infrared images. A kernel-based nonlinear support vector machine (SVM) is then taken as the model, and supervised learning is used to perform regression analysis on and train the infrared-image depth features; during training, the model parameters are corrected using the minimum mean-square error of the regression on known data. The trained model can then estimate the depth distribution of a monocular infrared image. Experimental results show that the model produces reasonably consistent depth estimates for monocular infrared images.
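A minimal sketch of the regression stage under stated assumptions: depth-related feature vectors are taken as given, FastICA stands in for the ICA step, the stepwise-linear-regression feature selection and the MMSE-based parameter correction are omitted, and the kernel settings and component count are placeholders.

```python
from sklearn.decomposition import FastICA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

def train_depth_regressor(ir_features, depths, n_components=20):
    """Kernel SVR depth model over ICA-transformed infrared image features.

    ir_features : (n_samples, n_features) array of per-patch feature vectors
    depths      : (n_samples,) ground-truth depth values for training
    """
    model = make_pipeline(
        StandardScaler(),
        FastICA(n_components=n_components, random_state=0),
        SVR(kernel="rbf", C=10.0, epsilon=0.1),
    )
    model.fit(ir_features, depths)
    return model
```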

18.
A method for making a contiguous series of blood vessel diameter estimates from digitized images is proposed. It makes use of a vessel intensity profile model based on the vessel geometry and the physics of the imaging process, providing estimates of far greater accuracy than previously obtained. A variety of techniques are used to reduce the computational demand. The method includes the generation of measurement estimation error, which is important in determining total vessel patency as well as providing a basic measure of diameter estimate accuracy.

19.
In order to interpret ultrasound images, it is important to understand their formation and the properties that affect them, especially speckle noise. This image texture, or speckle, is a correlated, multiplicative noise that inherently occurs in all types of coherent imaging systems; its statistics depend on the density and type of scatterers in the tissue. This paper presents a new method for echocardiographic image segmentation in a variational level-set framework. A partial differential equation-based flow is designed locally in order to achieve a maximum-likelihood segmentation of the region of interest. A Rayleigh probability distribution is used to model the local B-mode ultrasound image intensities. To better cope with speckle noise and local changes of intensity, the proposed local region term is combined with a local phase-based geodesic active contour term. Comparison results on natural and simulated images show that the proposed model is robust to attenuations and captures low-contrast boundaries well.
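For the data term only, here is a sketch of the Rayleigh maximum-likelihood fit that such a region-based flow relies on: the ML estimate of the Rayleigh scale from the intensities of a region, and the per-pixel log-likelihood that would be accumulated inside and outside the evolving contour. The local-phase geodesic term and the level-set evolution itself are not reproduced.

```python
import numpy as np

def rayleigh_ml_loglik(intensities):
    """ML Rayleigh fit for B-mode intensities in a region and the per-pixel
    log-likelihood under that fit (the region data term of the flow)."""
    i = np.asarray(intensities, dtype=float)
    i = i[i > 0]                          # Rayleigh support is positive
    sigma2 = np.mean(i ** 2) / 2.0        # ML estimate of the Rayleigh parameter
    # log f(x; sigma) = log(x / sigma^2) - x^2 / (2 sigma^2)
    loglik = np.log(i / sigma2) - i ** 2 / (2.0 * sigma2)
    return sigma2, loglik
```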

20.
A technique that improves precision in classification results using information extracted from video features is introduced. It combines fuzzy rule-based classification with the detection of changing regions in outdoor scenes. The approach is invariant to extreme illumination changes.
