Similar Articles
Found 20 similar articles (search time: 29 ms)
1.
2.
Magnetic resonance imaging (MRI) uses applied spatial variations in the magnetic field to encode spatial position. Nonuniformities in the main magnetic field can therefore cause image distortions. To correct these distortions, it is desirable to simultaneously acquire the data together with a field map in registration. We propose a joint estimation (JE) framework with a fast, noniterative approach using harmonic retrieval (HR) methods, combined with a multi-echo echo-planar imaging (EPI) acquisition. The connection with HR establishes an elegant framework for solving the JE problem through a sequence of 1-D HR problems for which efficient solutions are available. We also derive the condition on the smoothness of the field map under which HR techniques can recover the image with high signal-to-noise ratio. Compared with other dynamic field mapping methods, this method is not constrained by the absolute level of the field inhomogeneity over the slice, but relies only on generous pixel-to-pixel smoothness. Moreover, the method can recover the image, field map, and T2* map simultaneously.
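A minimal per-pixel sketch of the idea behind joint estimation from multi-echo data (not the authors' harmonic-retrieval solver): under the signal model s_n = ρ·exp((i2πΔf − R2*)·TE_n), a log-linear least-squares fit along the echo dimension recovers the field map and T2* simultaneously. All names and the no-phase-wrap assumption are illustrative.

```python
import numpy as np

def fit_field_and_t2star(echoes, te):
    """Per-pixel field map (Hz) and T2* (s) from multi-echo complex images.

    echoes : complex array (n_echo, ny, nx); te : echo times in seconds.
    Assumes the phase does not wrap between consecutive echoes.
    """
    n = echoes.shape[0]
    logmag = np.log(np.abs(echoes) + 1e-12).reshape(n, -1)
    phase = np.unwrap(np.angle(echoes), axis=0).reshape(n, -1)
    A = np.stack([np.asarray(te), np.ones(n)], axis=1)      # linear model in TE
    slope_mag = np.linalg.lstsq(A, logmag, rcond=None)[0][0]
    slope_ph = np.linalg.lstsq(A, phase, rcond=None)[0][0]
    shape = echoes.shape[1:]
    fieldmap = (slope_ph / (2 * np.pi)).reshape(shape)      # off-resonance in Hz
    t2star = (1.0 / np.maximum(-slope_mag, 1e-6)).reshape(shape)
    return fieldmap, t2star
```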

3.
Current techniques for segmenting macular optical coherence tomography (OCT) images have been 2-D in nature. Furthermore, commercially available OCT systems have focused on segmenting only a single layer of the retina, even though each intraretinal layer may be affected differently by disease. We report an automated approach for segmenting (anisotropic) 3-D macular OCT scans into five layers. Each macular OCT dataset consisted of six linear radial scans centered at the fovea. The six surfaces defining the five layers were identified on each 3-D composite image by transforming the segmentation task into that of finding a minimum-cost closed set in a geometric graph constructed from edge/regional information and a priori determined surface smoothness and interaction constraints. The method was applied to the macular OCT scans of 12 patients (24 3-D composite image datasets) with unilateral anterior ischemic optic neuropathy (AION). Using the average of three experts' tracings as a reference standard resulted in an overall mean unsigned border positioning error of 6.1 ± 2.9 μm, a result comparable to the interobserver variability (6.9 ± 3.3 μm). Our quantitative analysis of the automated segmentation results from the AION subject data revealed that the inner retinal layer thickness for the affected eye was on average 24.1 μm (21%) smaller than for the unaffected eye (p < 0.001), supporting the need for segmenting the layers separately.
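The paper finds six coupled surfaces as a minimum-cost closed set in a 3-D geometric graph; as a much-simplified illustration of the same ingredients (a cost image plus a hard smoothness constraint), the sketch below extracts a single surface from a 2-D slice by dynamic programming. Parameters such as `max_jump` are illustrative stand-ins for the paper's smoothness constraints.

```python
import numpy as np

def extract_surface(cost, max_jump=2):
    """Minimal-cost surface (one row per column) in a 2-D cost image,
    with |row change| <= max_jump between neighbouring columns."""
    nr, nc = cost.shape
    acc = cost.astype(float).copy()
    back = np.zeros((nr, nc), dtype=int)
    for c in range(1, nc):
        for r in range(nr):
            lo, hi = max(0, r - max_jump), min(nr, r + max_jump + 1)
            prev = acc[lo:hi, c - 1]
            k = int(np.argmin(prev))
            acc[r, c] += prev[k]
            back[r, c] = lo + k
    surface = np.empty(nc, dtype=int)
    surface[-1] = int(np.argmin(acc[:, -1]))
    for c in range(nc - 1, 0, -1):          # trace the optimal path back
        surface[c - 1] = back[surface[c], c]
    return surface
```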

4.
This paper proposes an unsupervised classification algorithm for polarimetric SAR images based on a discriminative clustering framework, which uses discriminative supervised classification techniques to achieve unsupervised clustering. To realize the algorithm, an energy function combining a softmax regression model with a Markov random field smoothness constraint is defined. In this model, both the pixel labels and the classifier are unknown variables to be optimized. Starting from an initial labelling produced by the $H/\bar{\alpha}$ target polarimetric decomposition and the K-Wishart polarimetric statistical distribution, the algorithm alternately optimizes the energy function over the classifier and the labels, thereby solving for both. Experimental results on real polarimetric SAR data demonstrate the effectiveness and superiority of the algorithm.
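A hedged sketch of the label-update half of such an energy: given per-pixel unary costs (e.g., negative log softmax probabilities from the current classifier), a synchronous ICM pass trades the unary term against a Potts smoothness penalty. This illustrates the structure of the energy, not the paper's exact optimizer; `beta` and the 4-neighbourhood are assumptions.

```python
import numpy as np

def icm_labels(unary, beta=1.0, n_iter=5):
    """Synchronous ICM under a Potts smoothness prior.

    unary : (K, H, W) per-class costs, e.g. -log softmax probabilities.
    """
    K = unary.shape[0]
    labels = np.argmin(unary, axis=0)
    for _ in range(n_iter):
        pad = np.pad(labels, 1, mode="edge")
        nbrs = np.stack([pad[:-2, 1:-1], pad[2:, 1:-1],
                         pad[1:-1, :-2], pad[1:-1, 2:]])   # 4-neighbour labels
        # per class, count disagreeing neighbours as the smoothness cost
        disagree = (nbrs[None] != np.arange(K)[:, None, None, None]).sum(axis=1)
        labels = np.argmin(unary + beta * disagree, axis=0)
    return labels
```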

5.
Simulations provide a way of generating data where ground truth is known, enabling quantitative testing of image processing methods. In this paper, we present the construction of 20 realistic digital brain phantoms that can be used to simulate medical imaging data. The phantoms are made from 20 normal adults to take intersubject anatomical variability into account. Each digital brain phantom was created by registering and averaging four T1-, T2-, and proton density (PD)-weighted magnetic resonance imaging (MRI) scans from each subject. A fuzzy minimum-distance classification was used to classify voxel intensities from the T1, T2, and PD average volumes into grey matter, white matter, cerebrospinal fluid, and fat. Automatically generated mask volumes were required to separate brain from nonbrain structures, and ten fuzzy tissue volumes were created: grey matter, white matter, cerebrospinal fluid, skull, marrow within the bone, dura, fat, tissue around the fat, muscles, and skin/muscles. A fuzzy vessel class was also obtained from the segmentation of each subject's magnetic resonance angiography scan. These eleven fuzzy volumes, which describe the spatial distribution of anatomical tissues, define the digital phantom, where voxel intensity is proportional to the fraction of tissue within the voxel. The fuzzy volumes can be used to drive simulators for different modalities, including MRI, PET, or SPECT. These phantoms were used to construct 20 simulated T1-weighted MR scans. To evaluate the realism of these simulations, we propose two approaches that compare them to real data acquired with the same acquisition parameters. The first approach compares the intensities within the segmented classes in both real and simulated data. In the second approach, a whole-brain voxel-wise comparison between simulations and real T1-weighted data is performed. The first comparison shows that the segmented classes represent the anatomy properly on average, and that inside these classes the simulated and real intensity values are quite similar. The second comparison enables the study of regional variations with no a priori classes. The experiments demonstrate that these variations are small when the real data are corrected for intensity nonuniformity.
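Since the phantom defines voxel intensity as proportional to the tissue fractions, a minimal simulation sketch is a weighted sum of assumed per-tissue mean intensities plus noise; the tissue means and the Gaussian noise model are illustrative, not the simulator actually used.

```python
import numpy as np

def simulate_t1w(fuzzy, tissue_means, noise_sigma=3.0, seed=0):
    """Voxel intensity = sum over tissues of (fraction * assumed mean) + noise.

    fuzzy        : (T, Z, Y, X) fuzzy tissue fractions (the phantom itself)
    tissue_means : (T,) assumed mean T1-weighted intensity per tissue
    """
    rng = np.random.default_rng(seed)
    vol = np.tensordot(np.asarray(tissue_means), fuzzy, axes=1)  # sum over T
    return vol + rng.normal(0.0, noise_sigma, vol.shape)
```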

6.
This paper addresses the use of independent component analysis (ICA) for image compression. Our goal is to study the adequacy, for lossy transform compression, of bases learned from data using ICA. Since these bases are in general non-orthogonal, two methods are considered to obtain image representations: matching-pursuit-type algorithms, and orthogonalization of the ICA bases followed by standard orthogonal projection. Several coder architectures are evaluated and compared using both the usual SNR and a perceptual quality measure called the picture quality scale. We consider four classes of images (natural, faces, fingerprints, and synthetic) to study the generalization and adaptation abilities of the data-dependent ICA bases. In this study, we have observed that bases learned from natural images generalize well to other classes of images, while bases learned from the other, more specific classes show good specialization. For example, for fingerprint images, our coders perform close to the special-purpose WSQ coder developed by the FBI. For some classes, the visual quality of the images obtained with our coders is similar to that obtained with JPEG2000, which is currently the state-of-the-art coder and much more sophisticated than a simple transform coder. We conclude that ICA provides an excellent tool for learning a coder for a specific image class, which can even be done using a single image from that class. This is an alternative to hand-tailoring a coder for a given class (as was done, for example, in the WSQ coder for fingerprint images). Another conclusion is that a coder learned from natural images acts as a universal coder; that is, it generalizes very well across a wide range of image classes.
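A rough sketch of the orthogonalized-ICA route described above, assuming scikit-learn's FastICA: learn bases from random image patches, orthogonalize them, and code patches by orthogonal projection. Patch size, component count, and the sampling scheme are illustrative.

```python
import numpy as np
from sklearn.decomposition import FastICA

def learn_orthogonalized_ica_bases(image, patch=8, n_components=32,
                                   n_patches=5000, seed=0):
    """Learn ICA bases from random patches, then orthogonalize them."""
    rng = np.random.default_rng(seed)
    ys = rng.integers(0, image.shape[0] - patch, n_patches)
    xs = rng.integers(0, image.shape[1] - patch, n_patches)
    X = np.stack([image[y:y + patch, x:x + patch].ravel()
                  for y, x in zip(ys, xs)])
    X = X - X.mean(axis=1, keepdims=True)            # remove patch DC
    ica = FastICA(n_components=n_components, whiten="unit-variance",
                  max_iter=500, random_state=seed)
    ica.fit(X)
    Q, _ = np.linalg.qr(ica.mixing_)                 # orthogonalize the bases
    return Q                                         # code: c = Q.T @ p, p ≈ Q @ c
```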

7.
In content-based image retrieval, understanding the user's needs is a challenging task that requires integrating the user into the retrieval process. Relevance feedback (RF) has proven to be an effective tool for taking the user's judgement into account. In this paper, we present a new RF framework based on a feature selection algorithm that combines the advantages of a probabilistic formulation with those of using both positive examples (PE) and negative examples (NE). Through interaction with the user, our algorithm learns the importance the user assigns to image features, and then applies the results to define similarity measures that correspond better to that judgement. The use of the NE allows images the user does not want to be discarded, thereby improving retrieval accuracy. The probabilistic formulation of the problem presents many advantages and opens the door to further modeling possibilities that achieve good feature selection. It makes it possible to cluster the query data into classes, choose the probability law that best models each class, model missing data, and support queries with multiple PE and/or NE classes. The basic principle of our algorithm is to assign more importance to features with a high likelihood and to those which distinguish well between PE classes and NE classes. The proposed algorithm was validated both separately and in an image retrieval context, and the experiments show that it performs effective feature selection and contributes to improved retrieval effectiveness.
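As a toy stand-in for the probabilistic feature selection (not the paper's formulation), one can weight each feature by how well it separates positive from negative examples relative to their spread, then use the weights in the similarity measure:

```python
import numpy as np

def feature_weights(pe, ne, eps=1e-9):
    """Relevance weights from positive (pe) and negative (ne) example
    feature matrices of shape (n_samples, n_features)."""
    separation = (pe.mean(axis=0) - ne.mean(axis=0)) ** 2
    spread = pe.var(axis=0) + ne.var(axis=0) + eps
    w = separation / spread
    return w / w.sum()

def weighted_distance(query, candidates, w):
    """Similarity measure induced by the learned weights."""
    return np.sqrt(((query - candidates) ** 2 * w).sum(axis=-1))
```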

8.
A new approach to regularization methods for image processing is introduced and developed, using as a vehicle the problem of computing dense optical flow fields in an image sequence. The solution of the new problem formulation is computed with an efficient multiscale algorithm. Experiments on several image sequences demonstrate the substantial computational savings that can be achieved because the algorithm is noniterative and has a per-pixel computational complexity that is independent of image size. The new approach also has a number of other important advantages. Specifically, multiresolution flow field estimates are available, allowing great flexibility in dealing with the tradeoff between resolution and accuracy. Multiscale error covariance information is also available, which is of considerable use in assessing the accuracy of the estimates. In particular, these error statistics can be used as the basis for a rational procedure for determining the spatially varying optimal reconstruction resolution. Furthermore, if there are compelling reasons to insist upon a standard smoothness constraint, the new algorithm provides an excellent initialization for the iterative algorithms associated with the smoothness-constraint problem formulation. Finally, the usefulness of the approach should extend to a wide variety of ill-posed inverse problems in which variational techniques seeking a "smooth" solution are generally used.

9.
Maps of local tissue compression or expansion are often computed by comparing magnetic resonance imaging (MRI) scans using nonlinear image registration. The resulting changes are commonly analyzed using tensor-based morphometry to make inferences about anatomical differences, often based on the Jacobian map, which estimates local tissue gain or loss. Here, we provide rigorous mathematical analyses of Jacobian maps and use them to motivate a new numerical method for constructing unbiased nonlinear image registration. First, we argue that logarithmic transformation is crucial for analyzing Jacobian values that represent morphometric differences. We then examine the statistical distributions of log-Jacobian maps by defining the Kullback-Leibler (KL) distance on material density functions arising in continuum-mechanical models. With this framework, unbiased image registration can be constructed by quantifying the symmetric KL-distance between the identity map and the resulting deformation. Implementation details, addressing the proposed unbiased registration as well as the minimization of symmetric image matching functionals, are then discussed and shown to be applicable to other registration methods, such as inverse-consistent registration. In the results section, we test the proposed framework and present an illustrative application mapping detailed 3-D brain changes in sequential magnetic resonance imaging scans of a patient diagnosed with semantic dementia. Using permutation tests, we show that the symmetrization of image registration statistically reduces skewness in the log-Jacobian map.
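The log-Jacobian map at the heart of this analysis is easy to sketch for a 2-D displacement field using finite differences; the clamping of near-zero determinants is an illustrative safeguard, not part of the paper's method.

```python
import numpy as np

def log_jacobian(ux, uy):
    """log det J of the 2-D map x -> x + u(x), via central differences.

    ux, uy : (H, W) displacement components along columns (x) and rows (y).
    """
    duxdy, duxdx = np.gradient(ux)    # derivatives along rows, then columns
    duydy, duydx = np.gradient(uy)
    detj = (1.0 + duxdx) * (1.0 + duydy) - duxdy * duydx
    return np.log(np.maximum(detj, 1e-6))   # clamp folds to keep log defined
```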

10.
A nonsmoothing approach to the estimation of vessel contours in angiograms
Accurate and fully automatic assessment of vessel (stenosis) dimensions in angiographic images has been sought as a diagnostic tool, in particular for coronary heart disease. Here, the authors propose a new technique to estimate vessel borders in angiographic images, a necessary first step for any automatic analysis system. Unlike in previous approaches, the obtained edge estimates are not artificially smoothed; this is extremely important since quantitative analysis is the goal. Another important feature of the proposed technique is that no constant background is assumed, making it well suited for nonsubtracted angiograms. The key aspect of the authors' approach is that continuity/smoothness constraints are used not to modify the estimates directly derived from the image (which would introduce distortion) but rather to select (without modifying) candidate estimates. Robustness against an unknown background is provided by the use of a morphological edge operator instead of a linear operator (such as a matched filter), which would have to assume a known background and known vessel shape.
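A minimal sketch of a morphological edge operator of the kind referred to above (dilation minus erosion, via SciPy); the structuring-element size is an assumption.

```python
import numpy as np
from scipy import ndimage

def morphological_edges(image, size=3):
    """Morphological gradient: dilation minus erosion. Unlike a matched
    filter, it assumes neither a known background nor a vessel profile."""
    dil = ndimage.grey_dilation(image, size=(size, size))
    ero = ndimage.grey_erosion(image, size=(size, size))
    return dil - ero
```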

11.
This paper addresses the problem of neuro-anatomical registration across individuals for functional [15O] water PET activation studies. A new algorithm for three-dimensional (3-D) nonlinear structural registration (warping) of MR scans is presented. The method performs a hierarchically scaled search for a displacement field, maximizing one of several voxel similarity measures derived from the two-dimensional (2-D) histogram of matched image intensities, subject to a regularizer that ensures smoothness of the displacement field. The effect of the nonlinear structural registration is studied when it is computed on anatomical MR scans and applied to coregistered [15O] water PET scans from the same subjects: in this experiment, a study of visually guided saccadic eye movements. The performance of the nonlinear warp is evaluated using multivariate functional signal and noise measures. These measures prove to be useful for comparing different intersubject registration approaches, e.g., affine versus nonlinear. A comparison of 12-parameter affine registration versus nonlinear registration demonstrates that the proposed nonlinear method increases the number of voxels retained in the cross-subject mask. We demonstrate that improved structural registration may result in an improved multivariate functional signal-to-noise ratio (SNR). Furthermore, registering the PET scans using the 12-parameter affine transformations that align the coregistered MR images does not improve registration compared to 12-parameter affine alignment of the PET images directly.
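One common voxel similarity measure derived from the 2-D histogram of matched intensities is mutual information; the paper evaluates several such measures, so this minimal sketch is illustrative rather than the specific measure used:

```python
import numpy as np

def mutual_information(a, b, bins=64):
    """Mutual information of two images from their 2-D joint histogram."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = joint / joint.sum()
    px = p.sum(axis=1, keepdims=True)     # marginals
    py = p.sum(axis=0, keepdims=True)
    nz = p > 0
    return float(np.sum(p[nz] * np.log(p[nz] / (px @ py)[nz])))
```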

12.
Guo, J.-M. Electronics Letters, 2008, 44(7): 462-464
Block truncation coding (BTC) is an efficient compression technique offering good image quality. Nonetheless, the blocking effect inherent in BTC causes severe perceptual artefacts as the compression ratio is increased. Conversely, error diffusion (EDF) enjoys the benefit of diffusing the quantised error into neighbouring pixels, so the average tones in any local area of the error-diffused image are preserved unchanged. Presented is a hybrid approach which combines the proposed modified EDF with BTC. As documented in the experimental results, image quality is much better than that of BTC, and the complexity is even lower than that of traditional BTC.
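For reference, classic two-level moment-preserving BTC for one block looks like the sketch below; the paper's contribution, the modified error-diffusion hybrid, is not reproduced here.

```python
import numpy as np

def btc_encode(block):
    """Two-level moment-preserving BTC of one block: keep mean, std and a
    bitmap of pixels at or above the mean."""
    m, s = block.mean(), block.std()
    bitmap = block >= m
    q, n = int(bitmap.sum()), block.size
    if q in (0, n):                       # flat block: one level suffices
        return m, m, bitmap
    lo = m - s * np.sqrt(q / (n - q))     # output levels preserving mean/variance
    hi = m + s * np.sqrt((n - q) / q)
    return lo, hi, bitmap

def btc_decode(lo, hi, bitmap):
    return np.where(bitmap, hi, lo)
```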

13.
This study contributes to the relatively new but growing discipline of QoE management in content delivery systems. It focuses on the development of a QoE-based management framework for constructing QoE models for different types of multimedia content delivered to three typical mobile terminals: a mobile phone, a PDA, and a laptop. A statistical modelling technique is employed which correlates QoS parameters with estimates of QoE perceptions. These correlations were found to depend on the terminal and the multimedia content type. The application of the framework and prediction models in QoE management strategies is demonstrated with examples. We find that significant resource savings can be achieved with our approach compared with conventional QoS solutions.
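The statistical modelling technique is not detailed in the abstract; as a hedged illustration of correlating QoS parameters with QoE estimates, a least-squares linear model per terminal/content-type pair could look like the following (all names are hypothetical):

```python
import numpy as np

def fit_qoe_model(qos, mos):
    """Least-squares fit of QoE (mean-opinion-score estimates) against QoS
    parameters; columns of `qos` might be bandwidth, loss and jitter."""
    X = np.hstack([qos, np.ones((len(qos), 1))])   # affine model
    coef, *_ = np.linalg.lstsq(X, mos, rcond=None)
    return coef

def predict_qoe(coef, qos):
    return np.hstack([qos, np.ones((len(qos), 1))]) @ coef
```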

14.
15.
Most existing compressed sensing (CS) recovery algorithms adopt a fixed basis, i.e., they decompose the signal in a predetermined domain such as the DCT, wavelet, or gradient domain. These domains ignore the nonstationary nature of natural signals and lack adaptivity, so the image cannot be decomposed sparsely enough; as a result, CS recovery performs poorly, which limits the application of CS to images. This paper proposes an image CS recovery algorithm that uses split Bregman iteration to solve a regularization model based on collaborative sparsity, which effectively characterizes both the local smoothness and the nonlocal self-similarity of images while achieving higher-quality recovery. Experiments demonstrate the effectiveness of the proposed algorithm, which exceeds the best current mainstream algorithms by 1 dB in peak signal-to-noise ratio (PSNR).
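The split Bregman solver for the collaborative sparsity model is beyond a short sketch; as a minimal stand-in showing the shape of iterative CS recovery with a fixed sparsifying transform (exactly the kind of fixed-basis scheme the paper improves upon), here is plain ISTA with a 2-D DCT prior:

```python
import numpy as np
from scipy.fft import dctn, idctn

def ista_recover(y, mask, lam=0.05, step=1.0, n_iter=200):
    """ISTA for min_x 0.5*||M x - y||^2 + lam*||DCT(x)||_1, where M is a
    pixel-subsampling mask and y is zero-filled outside the samples."""
    x = np.zeros_like(y, dtype=float)
    for _ in range(n_iter):
        grad = mask * x - y                       # M^T (M x - y), y zero-filled
        z = dctn(x - step * grad, norm="ortho")   # move to the sparse domain
        z = np.sign(z) * np.maximum(np.abs(z) - lam * step, 0.0)  # soft threshold
        x = idctn(z, norm="ortho")
    return x
```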

16.
This paper investigates various classification techniques, applied to subband coding of images, as a way of exploiting the nonstationary nature of image subbands. The advantages of subband classification are characterized in a rate-distortion framework in terms of "classification gain" and overall "subband classification gain." Two algorithms, maximum classification gain and equal mean-normalized standard deviation classification, which allow an unequal number of blocks in each class, are presented. The dependence between the classification maps from different subbands is exploited either directly, while encoding the classification maps, or indirectly, by constraining the classification maps. The trade-off between the classification gain and the amount of side information is explored. Coding results for a classification-based subband image coder are presented. The simulation results demonstrate the value of classification in subband coding.
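The classification gain can be sketched with its standard textbook definition, assumed here rather than taken from the paper: the source variance divided by the probability-weighted geometric mean of the per-class variances, expressed in dB.

```python
import numpy as np

def classification_gain_db(coeffs, labels):
    """Source variance over the probability-weighted geometric mean of the
    per-class variances, in dB.

    coeffs : 1-D array of subband coefficients
    labels : class index assigned to each coefficient
    """
    classes, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    class_vars = np.array([coeffs[labels == c].var() for c in classes])
    gain = coeffs.var() / np.prod(class_vars ** p)
    return 10.0 * np.log10(gain)
```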

17.
In this paper, we propose a maximum-entropy expectation-maximization (MEEM) algorithm and use it for density estimation. The maximum-entropy constraint is imposed to ensure smoothness of the estimated density function. The derivation of the MEEM algorithm requires determining the covariance matrix in the framework of the maximum-entropy likelihood function, which is difficult to solve analytically. We therefore derive the MEEM algorithm by optimizing a lower bound of the maximum-entropy likelihood function. We note that the classical expectation-maximization (EM) algorithm has previously been employed for 2-D density estimation; we extend this use to image recovery from randomly sampled data and to sensor field estimation from randomly scattered sensor networks, and we apply the proposed MEEM approach to all three problems. Computer simulation experiments demonstrate the superior performance of the proposed MEEM algorithm in comparison to existing methods.
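For contrast with the proposed MEEM algorithm, a minimal classical EM for a 1-D Gaussian mixture (without the maximum-entropy smoothness constraint) looks like this:

```python
import numpy as np

def em_gmm_1d(x, k=2, n_iter=100, seed=0):
    """Classical EM for a 1-D Gaussian mixture."""
    rng = np.random.default_rng(seed)
    mu = rng.choice(x, size=k, replace=False)
    var = np.full(k, x.var())
    w = np.full(k, 1.0 / k)
    for _ in range(n_iter):
        # E-step: responsibilities of each component for each sample
        d = x[:, None] - mu[None, :]
        r = w * np.exp(-0.5 * d ** 2 / var) / np.sqrt(2 * np.pi * var)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, variances
        nk = r.sum(axis=0)
        w = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        d = x[:, None] - mu[None, :]
        var = (r * d ** 2).sum(axis=0) / nk + 1e-9
    return w, mu, var
```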

18.
Sparse representation is a new approach that has received significant attention for image classification and recognition. This paper presents a PCA-based dictionary construction for sparse representation and classification of universal facial expressions. In our method, expressive facial images of each subject are subtracted from a neutral facial image of the same subject. PCA is then applied to these difference images to model the variations within each class of facial expressions. The learned principal components are used as the atoms of the dictionary. In the classification step, a given test image is sparsely represented as a linear combination of the principal components of the six basic facial expressions. Our extensive experiments on several publicly available face datasets (CK+, MMI, and Bosphorus) show that our framework improves on the recognition rate of state-of-the-art techniques by about 6%. This approach is promising and can further be applied to visual object recognition.
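A simplified sketch of the pipeline: build per-class atoms as principal components of difference images, then classify a test difference image by smallest reconstruction residual. The residual rule is a simplification of the paper's sparse-coding step.

```python
import numpy as np

def pca_atoms(diff_images, n_atoms=10):
    """Atoms for one expression class: principal components of its
    expressive-minus-neutral difference images (rows = vectorized images)."""
    X = diff_images - diff_images.mean(axis=0)
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    return vt[:n_atoms].T                      # orthonormal columns

def classify_expression(test_diff, class_dictionaries):
    """Pick the class whose PCA subspace reconstructs the test image best."""
    residuals = [np.linalg.norm(test_diff - d @ (d.T @ test_diff))
                 for d in class_dictionaries]  # one atom matrix per class
    return int(np.argmin(residuals))
```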

19.
We present a new matrix-vector formulation of a wavelet-based subband decomposition. This formulation allows for the decomposition of both the convolution operator and the signal in the subband domain. With this approach, any single-channel linear space-invariant filtering problem can be cast into a multichannel framework. We apply this decomposition to the linear space-invariant image restoration problem and propose a family of multichannel linear minimum mean square error (LMMSE) restoration filters. These filters explicitly incorporate both within-subband and between-subband (channel) relations of the decomposed image. Since only within-channel stationarity is assumed in the image model, this approach presents a new method for modeling the nonstationarity of images. Experimental results testing the proposed multichannel LMMSE filters are presented. These experiments show that if accurate estimates of the subband statistics are available, the proposed multichannel filters provide major improvements over traditional single-channel filters.
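As a within-channel-only simplification of the multichannel LMMSE idea (the paper's filters also model between-channel relations), an empirical Wiener shrinkage applied per wavelet subband, assuming PyWavelets, looks like:

```python
import numpy as np
import pywt

def subband_wiener(noisy, noise_var, wavelet="db4", level=2):
    """Empirical Wiener shrinkage applied independently in each subband."""
    coeffs = pywt.wavedec2(noisy, wavelet, level=level)
    out = [coeffs[0]]                          # keep the approximation band
    for detail in coeffs[1:]:
        out.append(tuple(
            c * np.maximum(c.var() - noise_var, 0.0) / max(c.var(), 1e-12)
            for c in detail))                  # per-subband Wiener gain
    return pywt.waverec2(out, wavelet)
```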

20.
Non-rigid image matching using thin-plate spline functions
Sun Dongmei, Qiu Zhengding. 《电子学报》 (Acta Electronica Sinica), 2002, 30(8): 1104-1107
This paper proposes a new method for non-rigid image matching using thin-plate spline (TPS) functions. The image is represented as a set of feature points. Exploiting the distinctive property of the TPS, which cleanly decomposes a deformation into affine and non-affine components, a TPS function is used to characterize the non-rigid mapping between feature point sets, and the estimation of the TPS mapping parameters is embedded in a deterministic annealing framework. An energy function for non-rigid matching based on the TPS bending energy is first formulated; deterministic annealing is then used to iteratively solve for the correspondence matrix and the mapping parameters between the point sets. Compared with other non-rigid matching algorithms, this method enforces the two-way one-to-one correspondence constraint between image feature points, avoids local minima, and is robust. Experimental results confirm the effectiveness and robustness of the proposed algorithm.
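Given a set of point correspondences, fitting the TPS map (whose bending-energy term the matching energy uses) is a small linear solve; the sketch below assumes the correspondences are already known, whereas the paper estimates them jointly by deterministic annealing.

```python
import numpy as np

def tps_fit(src, dst, reg=1e-3):
    """Solve 2-D TPS parameters mapping src points onto dst points.

    src, dst : (n, 2); reg trades fitting accuracy for bending energy."""
    n = len(src)
    d2 = ((src[:, None, :] - src[None, :, :]) ** 2).sum(-1)
    K = 0.5 * d2 * np.log(d2 + 1e-12)          # TPS kernel U(r) = r^2 log r
    P = np.hstack([np.ones((n, 1)), src])
    A = np.zeros((n + 3, n + 3))
    A[:n, :n] = K + reg * np.eye(n)
    A[:n, n:] = P
    A[n:, :n] = P.T
    b = np.zeros((n + 3, 2))
    b[:n] = dst
    params = np.linalg.solve(A, b)
    return params[:n], params[n:]              # non-affine weights, affine part

def tps_apply(pts, src, w, a):
    """Warp arbitrary points with the fitted TPS parameters."""
    d2 = ((pts[:, None, :] - src[None, :, :]) ** 2).sum(-1)
    U = 0.5 * d2 * np.log(d2 + 1e-12)
    return U @ w + np.hstack([np.ones((len(pts), 1)), pts]) @ a
```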
