Similar Documents
20 similar documents retrieved.
1.
李磊  董卓莉  张德贤 《电子学报》2018,46(6):1312-1318
A color image segmentation method based on adaptive region-constrained FCM (Fuzzy C-Means) is proposed. Combined with a hidden Markov model, the regional consistency of superpixels is adaptively incorporated into the clustering process as prior knowledge to improve clustering performance. The algorithm first generates superpixels of the image and computes each pixel's contribution to its superpixel, from which the superpixel's region-level membership function is derived. Then, depending on whether the superpixel a pixel belongs to has a dominant label, either the pixel-level or the region-level membership function is chosen to compute that pixel's pairwise prior probability, which strengthens the regional consistency of the segmentation result. Because the region-level membership function guides the direction of the clustering optimization, unused labels are removed during the iterations, and the final segmentation is obtained when the iteration terminates. Experimental results show that, compared with the reference algorithms, the proposed algorithm significantly improves segmentation performance.
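For reference, here is a minimal standard Fuzzy C-Means sketch in NumPy, illustrating only the clustering core that this entry extends; the superpixel region-level membership functions and the label-pruning step described above are omitted, and the choice of per-pixel colour features is only an assumption.

```python
import numpy as np

def fcm(X, K, m=2.0, n_iter=100, tol=1e-5, seed=0):
    """Minimal standard Fuzzy C-Means (no region constraints).
    X: (N, D) pixel feature vectors (e.g. Lab colour values).
    Returns (U, C): fuzzy memberships (N, K) and cluster centres (K, D)."""
    rng = np.random.default_rng(seed)
    U = rng.random((X.shape[0], K))
    U /= U.sum(axis=1, keepdims=True)                  # random fuzzy memberships
    for _ in range(n_iter):
        Um = U ** m
        C = (Um.T @ X) / Um.sum(axis=0)[:, None]       # membership-weighted centres
        d2 = ((X[:, None, :] - C[None, :, :]) ** 2).sum(-1) + 1e-12
        U_new = d2 ** (-1.0 / (m - 1))                 # u_ik proportional to d_ik^(-2/(m-1))
        U_new /= U_new.sum(axis=1, keepdims=True)
        if np.abs(U_new - U).max() < tol:
            U = U_new
            break
        U = U_new
    return U, C
```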

2.
Motion compensated discrete cosine transform (MCDCT) coding is an efficient image sequence coding technique. To further reduce the bit-rate of the quantized DCT coefficients while keeping the visual quality, we propose an adaptive edge-based quadtree motion compensated discrete cosine transform coding scheme (EQDCT). In the proposed algorithm, the overhead motion information is encoded with a quadtree structure; non-edge blocks are encoded at a lower bit-rate and edge blocks at a higher bit-rate. The edge blocks are further classified into four classes according to the orientations and locations of the edges, and each class selects a different set of DCT coefficients to encode. In this way, only a few DCT coefficients need to be preserved and encoded while the visual quality of the images is maintained. In the proposed EQDCT image sequence coding scheme, the average bit-rate of each frame is reduced to 0.072 bit/pixel and the average PSNR value is 32.11 dB.

3.
EM image segmentation algorithm based on an inhomogeneous hidden MRF model
This paper introduces a Bayesian image segmentation algorithm that accounts for the label scale variability of images. An inhomogeneous hidden Markov random field is adopted to model the label scale variability as prior probabilities, and an EM algorithm is developed to estimate the parameters of the prior and likelihood probabilities. The segmentation is obtained with a MAP estimator. Different images are tested to verify the algorithm and comparisons with other segmentation algorithms are carried out. The segmentation results show that the proposed algorithm performs better than the others.
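For orientation, a minimal EM sketch for a one-dimensional Gaussian mixture over pixel intensities is given below; it assumes independent pixels and therefore leaves out the inhomogeneous hidden MRF prior and the MAP labeling step that the paper adds.

```python
import numpy as np

def gmm_em(x, K, n_iter=50, seed=0):
    """EM for a 1-D Gaussian mixture over pixel intensities.
    x: (N,) intensity vector; returns weights, means, variances and hard labels."""
    rng = np.random.default_rng(seed)
    mu = rng.choice(x, K)                          # random initial means
    var = np.full(K, x.var() + 1e-6)
    w = np.full(K, 1.0 / K)
    for _ in range(n_iter):
        # E-step: posterior responsibility of each class for each pixel
        ll = (-0.5 * (x[:, None] - mu[None, :]) ** 2 / var[None, :]
              - 0.5 * np.log(2 * np.pi * var[None, :]) + np.log(w[None, :]))
        ll -= ll.max(axis=1, keepdims=True)
        r = np.exp(ll)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means and variances
        nk = r.sum(axis=0) + 1e-12
        w = nk / x.size
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu[None, :]) ** 2).sum(axis=0) / nk + 1e-6
    labels = r.argmax(axis=1)                      # pixelwise hard assignment under the mixture
    return w, mu, var, labels
```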

4.
Sonar image segmentation using an unsupervised hierarchical MRF model
This paper is concerned with hierarchical Markov random field (MRF) models and their application to sonar image segmentation. We present an original hierarchical segmentation procedure devoted to images given by a high-resolution sonar. The sonar image is segmented into two kinds of regions: shadow (corresponding to a lack of acoustic reverberation behind each object lying on the sea-bed) and sea-bottom reverberation. The proposed unsupervised scheme takes into account the variety of the laws in the distribution mixture of a sonar image, and it estimates both the parameters of the noise distributions and the parameters of the Markovian prior. For the estimation step, we use an iterative technique which combines a maximum likelihood approach (for the noise model parameters) with a least-squares method (for the MRF-based prior). In order to model more precisely the local and global characteristics of image content at different scales, we introduce a hierarchical model involving a pyramidal label field. It combines coarse-to-fine causal interactions with a spatial neighborhood structure. This new method of segmentation, called the scale causal multigrid (SCM) algorithm, has been successfully applied to real sonar images and seems to be well suited to the segmentation of very noisy images. The experiments reported in this paper demonstrate that the discussed method performs better than other hierarchical schemes for sonar image segmentation.

5.
An adaptive image segmentation method based on multiscale Markov models
郭小卫  田铮  林伟  熊毅 《电子学报》2005,33(7):1279-1283
Building on the TSMAP (trainable sequential maximum a posteriori) method for image segmentation, this paper proposes an adaptive ATSMAP (adaptive TSMAP) image segmentation method based on a multiscale Markov model. Given a training image and its ground truth segmentation (GTS), GTS at coarser scales are generated by directly applying a wavelet transform to the GTS of the original image, from which the distribution parameters of the image data and the parameters of the Markov quadtree model are estimated; the context model parameters are estimated from low-dimensional features of the context (class-count features) rather than from the context itself. The method has two advantages: the computational cost of estimating the context model parameters is low, and the Markov quadtree model parameters can be re-optimized for the specific image to be segmented (the model adaptation step). This resolves the overfitting problem of the TSMAP method and still yields good segmentation results when the statistical characteristics of the image to be segmented do not match those of the training image. Experimental results on synthetic and SAR images show that the segmentation accuracy of this method is higher than that of TSMAP and several other image segmentation methods based on multiscale Markov models.

6.
To address the speed of constructing large-scale terrain in virtual battlefield environment simulation, a viewpoint- and tile-based LOD algorithm for large-scale terrain generation is presented, which can quickly build terrain models from massive data. The algorithm first partitions the massive terrain data into tiles stored in a quadtree structure; it then uses a simple cone-shaped view volume to quickly cull terrain tiles, discarding tiles outside the view volume, and for tiles intersecting the view volume it clips out the quadtree nodes lying inside it. Based on the distance from the viewpoint to a node and the roughness of the terrain itself, an appropriate LOD is selected to render the terrain, and finally a viewpoint-based method is used to eliminate cracks. With the same data, the algorithm takes about half the time of a plain quadtree algorithm and produces a more realistic rendering of large-scale terrain. The algorithm can also be applied to problems such as object reconstruction in reverse engineering.
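A rough sketch of cone-based culling of quadtree tiles with distance-based LOD selection is shown below; the node layout (dicts with center, radius, children) and the LOD rule are hypothetical illustrations, not the paper's implementation.

```python
import numpy as np

def cone_visible(center, radius, eye, axis, half_angle):
    """Conservative test of a tile's bounding sphere against a cone view volume:
    keep the tile if its centre lies within the cone widened by the sphere's angular radius."""
    v = center - eye
    d = np.linalg.norm(v) + 1e-9
    cos_a = np.dot(v / d, axis)                        # cosine of angle between tile and view axis
    return cos_a >= np.cos(half_angle + np.arcsin(min(1.0, radius / d)))

def select_lod(center, eye, roughness, base=200.0):
    """Pick a level of detail from viewpoint distance and tile roughness (hypothetical rule)."""
    d = np.linalg.norm(center - eye)
    return max(0, int(np.log2(max(1.0, d / (base * (1.0 + roughness))))))

def cull_quadtree(node, eye, axis, half_angle, out):
    """Recursively collect quadtree tiles that intersect the cone view volume."""
    if not cone_visible(node["center"], node["radius"], eye, axis, half_angle):
        return                                         # tile entirely outside the view cone
    if node.get("children"):
        for child in node["children"]:
            cull_quadtree(child, eye, axis, half_angle, out)
    else:
        out.append(node)                               # leaf tile to be rendered
```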

7.
Subpixel motion parameter estimation of target images based on phase correlation
To obtain subpixel-level target motion parameters, an image registration method based on phase correlation analysis is proposed. The estimation of target parameters for local and global target motion is first discussed: coarse registration is achieved through image subtraction and block matching, which separates target information from background information in the panoramic image, computes the target centre coordinates, and yields pixel-level motion parameters. Fine registration is then performed with phase-correlation-based image registration: using the translation property of the Fourier transform, the inverse Fourier transform of the normalized cross-power spectrum of the translated target images yields a two-dimensional impulse function whose peak corresponds to the image displacement, from which the subpixel displacement is obtained. In the laboratory, spot motion images were acquired with an autocollimation optical system, and the accuracy of the spot motion parameters was calibrated with a Leica theodolite. The results show that the method is effective, with a maximum registration error of 0.156, a standard deviation of 0.091, and a registration accuracy better than 1/10 pixel.
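The fine-registration step can be illustrated with a short NumPy phase-correlation sketch; the subpixel refinement used here is a simple 3x3 centroid around the correlation peak, which is only one possible scheme and not necessarily the one used in the paper.

```python
import numpy as np

def phase_correlation(ref, mov):
    """Estimate the translation between two images from the normalized cross-power spectrum.
    Returns (dy, dx) such that shifting `mov` by (dy, dx) aligns it with `ref`."""
    F1 = np.fft.fft2(ref)
    F2 = np.fft.fft2(mov)
    cross = F1 * np.conj(F2)
    cross /= np.abs(cross) + 1e-12                     # normalized cross-power spectrum
    corr = np.real(np.fft.ifft2(cross))                # ~ 2-D impulse at the displacement
    H, W = corr.shape
    py, px = np.unravel_index(np.argmax(corr), corr.shape)
    # subpixel refinement: intensity centroid over the 3x3 neighbourhood of the peak
    num_y = num_x = den = 0.0
    for oy in (-1, 0, 1):
        for ox in (-1, 0, 1):
            v = corr[(py + oy) % H, (px + ox) % W]
            num_y += oy * v
            num_x += ox * v
            den += v
    dy = py + num_y / den
    dx = px + num_x / den
    # displacements larger than half the image size wrap around to negative shifts
    if dy > H / 2: dy -= H
    if dx > W / 2: dx -= W
    return dy, dx
```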

8.
The bag of visual words (BOW) model is an efficient image representation technique for image categorization and annotation tasks. Building good visual vocabularies from automatically extracted image feature vectors produces discriminative visual words, which can improve the accuracy of image categorization tasks. Most approaches that use the BOW model for categorizing images ignore useful information that can be obtained from image classes to build visual vocabularies. Moreover, most BOW models use intensity features extracted from local regions and disregard colour information, which is an important characteristic of any natural scene image. In this paper, we show that integrating visual vocabularies generated from each image category improves the BOW image representation and increases accuracy in natural scene image classification. We use a keypoint density-based weighting method to combine the BOW representation with image colour information on a spatial pyramid layout. In addition, we show that visual vocabularies generated from training images of one scene image dataset can plausibly represent another scene image dataset on the same domain. This helps reduce the time and effort needed to build new visual vocabularies. The proposed approach is evaluated over three well-known scene classification datasets with 6, 8 and 15 scene categories, respectively, using 10-fold cross-validation. The experimental results, using support vector machines with a histogram intersection kernel, show that the proposed approach outperforms baseline methods such as Gist features, rgbSIFT features and different configurations of the BOW model.
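As a generic point of reference, the sketch below builds a single k-means vocabulary with scikit-learn, quantizes an image's local descriptors into a normalized word histogram, and evaluates the histogram-intersection kernel; the per-category vocabularies and colour/spatial-pyramid weighting that the paper integrates are not shown, and the vocabulary size is an arbitrary assumption.

```python
import numpy as np
from sklearn.cluster import KMeans

def build_vocabulary(descriptor_sets, vocab_size=200, seed=0):
    """Visual vocabulary by k-means over local descriptors pooled from training images.
    descriptor_sets: list of (n_i, D) arrays of local descriptors, one per image."""
    all_desc = np.vstack(descriptor_sets)
    return KMeans(n_clusters=vocab_size, n_init=10, random_state=seed).fit(all_desc)

def bow_histogram(descriptors, vocabulary):
    """Quantize an image's descriptors against the vocabulary; return a normalized histogram."""
    words = vocabulary.predict(descriptors)
    hist = np.bincount(words, minlength=vocabulary.n_clusters).astype(float)
    return hist / max(1.0, hist.sum())

def histogram_intersection(h1, h2):
    """Histogram intersection kernel, as used with the SVM classifier in the paper."""
    return np.minimum(h1, h2).sum()
```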

9.
Image segmentation partitions an image into nonoverlapping regions, which ideally should be meaningful for a certain purpose. Automatic segmentation of images is a very challenging fundamental task in computer vision and one of the most crucial steps toward image understanding. In recent years, many image segmentation algorithms have been developed, but they are often very complex and undesired results occur frequently. In this paper, we present an effective color image segmentation approach based on pixel classification with a least squares support vector machine (LS-SVM). Firstly, the pixel-level color feature, homogeneity, is extracted in consideration of local human visual sensitivity to color pattern variation in HSV color space. Secondly, the pixel-level texture features (maximum local energy, maximum gradient, and maximum second moment matrix) are computed via Gabor filters. Then, both the pixel-level color and texture features are used as input to the LS-SVM model (classifier), which is trained on samples selected with Arimoto entropy thresholding. Finally, the color image is segmented with the trained LS-SVM model (classifier). This segmentation fully exploits both the local information of the color image and the capability of the LS-SVM classifier. Experimental evidence shows that the proposed method yields effective segmentation results with good computational behavior, reducing the time and improving the quality of color image segmentation in comparison with state-of-the-art segmentation methods recently proposed in the literature.

10.
Optimization based on mean-field theory, including mean-field annealing (MFA), is widely used for discrete optimization (label assignment) problems defined on the pixel sites of an image. One formulation of MFA is via maximum entropy, where one seeks the joint distribution over the (random) assignments subject to an average level of cost. MFA is obtained by assuming the assignments at each pixel are statistically independent, given the observed image. Alternatively, we make the less restrictive assumption of independent row labelings. The independence assumption means that at each step, MFA optimizes over only one pixel, whereas our method jointly optimizes over an entire row, i.e., our method is less greedy. In principle, an MFA extension could be developed that explicitly re-estimates the row labeling distributions, but such an approach is, in practice, infeasible. Even so, we can indirectly implement this re-estimation, by re-estimating quantities that determine the row labeling distributions. These quantities are the a posteriori site probabilities, re-estimable via the well-known forward/backward (F/B) algorithm. Thus, our algorithm, which descends in the ME Lagrangian/free energy, consists of iterative application of F/B to the image rows (columns). At convergence, maximum a posteriori site labeling is performed. Our method was applied to segmentation of both real and synthetic noise-corrupted images. It achieved lower Markov random field model potentials and better segmentations compared with other methods and, in high noise, with standard MFA.
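The per-row re-estimation can be sketched as a standard scaled HMM forward/backward pass that returns the a posteriori site probabilities of one row, assuming the likelihoods, transition matrix and initial distribution are given; this is a generic illustration of the F/B step, not the paper's exact free-energy descent.

```python
import numpy as np

def row_posteriors(log_lik, log_trans, log_init):
    """Scaled forward/backward pass over one image row.
    log_lik:   (W, K) per-pixel log-likelihoods under each label.
    log_trans: (K, K) log transition matrix between adjacent labels.
    log_init:  (K,)   log initial label distribution.
    Returns the (W, K) posterior label probabilities for the row."""
    W, K = log_lik.shape
    lik = np.exp(log_lik - log_lik.max(axis=1, keepdims=True))
    trans = np.exp(log_trans)
    init = np.exp(log_init - log_init.max())
    alpha = np.zeros((W, K)); beta = np.ones((W, K)); scale = np.zeros(W)
    alpha[0] = init * lik[0]
    scale[0] = alpha[0].sum(); alpha[0] /= scale[0]
    for t in range(1, W):                       # forward recursion with scaling
        alpha[t] = (alpha[t - 1] @ trans) * lik[t]
        scale[t] = alpha[t].sum(); alpha[t] /= scale[t]
    for t in range(W - 2, -1, -1):              # backward recursion with the same scales
        beta[t] = (trans @ (beta[t + 1] * lik[t + 1])) / scale[t + 1]
    post = alpha * beta
    return post / post.sum(axis=1, keepdims=True)
```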

11.
Multiscale Bayesian segmentation using a trainable context model
Multiscale Bayesian approaches have attracted increasing attention for use in image segmentation. Generally, these methods tend to offer improved segmentation accuracy with reduced computational burden. Existing Bayesian segmentation methods use simple models of context designed to encourage large uniformly classified regions. Consequently, these context models have a limited ability to capture the complex contextual dependencies that are important in applications such as document segmentation. We propose a multiscale Bayesian segmentation algorithm which can effectively model complex aspects of both local and global contextual behavior. The model uses a Markov chain in scale to model the class labels that form the segmentation, but augments this Markov chain structure by incorporating tree based classifiers to model the transition probabilities between adjacent scales. The tree based classifier models complex transition rules with only a moderate number of parameters. One advantage of our segmentation algorithm is that it can be trained for specific segmentation applications by simply providing examples of images with their corresponding accurate segmentations. This makes the method flexible by allowing both the context and the image models to be adapted without modification of the basic algorithm. We illustrate the value of our approach with examples from document segmentation in which text, picture, and background classes must be separated.

12.
Discrete Markov image modeling and inference on the quadtree
Noncausal Markov (or energy-based) models are widely used in early vision applications for the representation of images in high-dimensional inverse problems. Due to their noncausal nature, these models generally lead to iterative inference algorithms that are computationally demanding. In this paper, we consider a special class of nonlinear Markov models which allow one to circumvent this drawback. These models are defined as discrete Markov random fields (MRF) attached to the nodes of a quadtree. The quadtree induces causality properties which enable the design of exact, noniterative inference algorithms, similar to those used in the context of Markov chain models. We first introduce an extension of the Viterbi algorithm which enables exact maximum a posteriori (MAP) estimation on the quadtree. Two other algorithms, related to the MPM criterion and to Bouman and Shapiro's (1994) sequential-MAP (SMAP) estimator, are derived on the same hierarchical structure. The estimation of the model hyperparameters is also addressed. Two expectation-maximization (EM)-type algorithms, allowing unsupervised inference with these models, are defined. The practical relevance of the different models and inference algorithms is investigated in the context of an image classification problem, on both synthetic and natural images.
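A compact sketch of exact MAP (max-product, Viterbi-style) inference on a complete quadtree is given below; it attaches data only to the leaves and assumes a square, power-of-two image, which is a simplification of the model class discussed above rather than the paper's exact algorithm.

```python
import numpy as np

def quadtree_map(leaf_loglik, log_trans, log_prior_root):
    """Exact MAP labeling on a complete quadtree by max-product message passing.
    leaf_loglik:    (H, W, K) leaf log-likelihoods (H = W = power of two).
    log_trans:      (K, K) log P(child label | parent label).
    log_prior_root: (K,) log prior over the root label.
    Returns the (H, W) MAP label map at the finest scale."""
    beta = [leaf_loglik]
    argmax_child = []
    # Upward (leaves -> root) pass
    while beta[-1].shape[0] > 1:
        b = beta[-1]
        # msg[i, j, parent, child] = log_trans[parent, child] + beta[i, j, child]
        msg = b[:, :, None, :] + log_trans[None, None, :, :]
        argmax_child.append(msg.argmax(axis=3))         # best child label per parent label
        m = msg.max(axis=3)
        # sum the four children messages into each parent node
        beta.append(m[0::2, 0::2] + m[1::2, 0::2] + m[0::2, 1::2] + m[1::2, 1::2])
    root_label = int((beta[-1][0, 0] + log_prior_root).argmax())
    # Downward (root -> leaves) pass: propagate the best child label given the parent label
    labels = np.array([[root_label]])
    for s in range(len(argmax_child) - 1, -1, -1):
        parent_labels = np.repeat(np.repeat(labels, 2, axis=0), 2, axis=1)
        labels = np.take_along_axis(argmax_child[s], parent_labels[:, :, None], axis=2)[:, :, 0]
    return labels
```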

13.
王玉  李玉  赵泉华 《信号处理》2014,30(10):1193-1203
Automatically determining the number of land-cover classes is a key and difficult problem in SAR image segmentation. To address it, a SAR image segmentation algorithm that automatically determines the number of classes is proposed. First, the pixel intensities of the SAR image are assumed to independently follow Gamma distributions, and the image model is built on this assumption; a posterior probability model characterising the segmentation is then constructed according to Bayes' theorem; an RJMCMC (Reversible Jump Markov Chain Monte Carlo) algorithm is designed to simulate this posterior model, determining the number of classes and performing region segmentation at the same time. The move types designed in the proposed RJMCMC algorithm include: splitting or merging nonempty classes, changing the parameter vector, changing labels, and creating or deleting empty classes. To validate the proposed variable-class segmentation algorithm, segmentation experiments were performed on real and simulated SAR images; qualitative and quantitative accuracy evaluations demonstrate the feasibility and effectiveness of the algorithm.

14.
Based on the global sparse representation of an image and the local properties of image patches, this paper fuses two kinds of prior knowledge, the low-dimensional manifold property of image patches and the sparsity of the whole image under an analytic contourlet representation, and proposes a high-quality compressive imaging algorithm. The algorithm reconstructs the image using iterative hard thresholding and manifold projection. To reduce computational complexity, the nonlinear manifold containing all image patches is approximated by a union of linear submanifolds: the patches are first classified according to their dominant orientations, and the basis of each linear subspace is then obtained with a sparse orthonormal transform. Experimental results show that the reconstructed images are significantly improved in both peak signal-to-noise ratio and visual quality.
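The iterative-hard-thresholding ingredient can be sketched as the standard IHT recursion below; the manifold-projection step and the patch classification by dominant orientation are omitted, and the step-size rule is just a safe default, not the paper's setting.

```python
import numpy as np

def iht(y, A, k, n_iter=100, step=None):
    """Iterative hard thresholding for y = A x with a k-sparse x.
    y: (m,) measurements, A: (m, n) sensing matrix, k: sparsity level."""
    m, n = A.shape
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2       # safe gradient step size
    x = np.zeros(n)
    for _ in range(n_iter):
        x = x + step * A.T @ (y - A @ x)             # gradient step on the data-fidelity term
        idx = np.argsort(np.abs(x))[:-k]             # indices of the n-k smallest magnitudes
        x[idx] = 0.0                                 # hard threshold: keep the k largest entries
    return x
```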

15.
Hidden Markov Bayesian texture segmentation using complex wavelet transform
The authors propose a multiscale Bayesian texture segmentation algorithm that is based on a complex wavelet domain hidden Markov tree (HMT) model and a hybrid label tree (HLT) model. The HMT model is used to characterise the statistics of the magnitudes of complex wavelet coefficients. The HLT model is used to fuse the interscale and intrascale context information. In the HLT, the interscale information is fused according to the label transition probability directly resolved by an EM algorithm. The intrascale context information is also fused so as to smooth out the variations in the homogeneous regions. In addition, the statistical model at pixel-level resolution is formulated by a Gaussian mixture model (GMM) in the complex wavelet domain at scale 1, which can improve the accuracy of the pixel-level model. The experimental results on several texture images are used to evaluate the algorithm.

16.
马丹丹 《红外与激光工程》2021,50(10):20210120-1-20210120-8
A synthetic aperture radar (SAR) target recognition method based on block matching is proposed. The SAR image to be recognized is divided into four blocks, each describing a local region of the target. For each block, a feature vector is constructed from the monogenic signal to describe its time-frequency distribution and local details. The monogenic signal decomposes the image at three levels, amplitude, phase, and local orientation, and can effectively describe local image variations, which is valuable for analysing target changes under extended operating conditions. The four resulting feature vectors are classified separately with sparse representation-based classification (SRC), yielding the corresponding reconstruction-error vectors. Following the basic idea of linear weighted fusion, a random weight matrix is then constructed for analysis: the results obtained under different weight vectors are statistically analysed to construct an effective decision variable, and the class of the test sample is determined by comparing the results over the different training classes. The proposed method fully accounts for the uncertainty of SAR image acquisition conditions during feature extraction and classification decision, obtaining the optimal decision through statistical analysis. Experiments were set up and carried out on the MSTAR dataset, covering one standard operating condition and three extended operating conditions. Comparison with several existing methods demonstrates the effectiveness of the proposed method.
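The monogenic decomposition used for the block features can be sketched with a frequency-domain Riesz transform as below; band-pass filtering, the four-block partition and the SRC/fusion stages are omitted, so this is only an illustration of the feature, not the full method.

```python
import numpy as np

def monogenic_signal(img):
    """Riesz-transform-based monogenic decomposition of an image into
    local amplitude, phase and orientation maps."""
    H, W = img.shape
    u = np.fft.fftfreq(W)[None, :]
    v = np.fft.fftfreq(H)[:, None]
    norm = np.sqrt(u ** 2 + v ** 2); norm[0, 0] = 1.0    # avoid division by zero at DC
    F = np.fft.fft2(img)
    r1 = np.real(np.fft.ifft2(F * (-1j) * u / norm))     # first Riesz component
    r2 = np.real(np.fft.ifft2(F * (-1j) * v / norm))     # second Riesz component
    amplitude = np.sqrt(img ** 2 + r1 ** 2 + r2 ** 2)    # local energy
    phase = np.arctan2(np.sqrt(r1 ** 2 + r2 ** 2), img)  # local phase
    orientation = np.arctan2(r2, r1)                     # local orientation
    return amplitude, phase, orientation
```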

17.
Automatic image annotation has been an active topic of research in the field of computer vision and pattern recognition for decades. In this paper, we present a new method for automatic image annotation based on a Gaussian mixture model (GMM) that considers cross-modal correlations. Specifically, we first employ a GMM fitted by the rival penalized expectation-maximization (RPEM) algorithm to estimate the posterior probabilities of each annotation keyword. Next, a label similarity graph is constructed as a weighted linear combination of label similarity and visual similarity, seamlessly integrating information from both low-level image visual features and high-level semantic concepts, which effectively avoids the situation where different images with the same candidate annotations obtain the same refinement results. Rank-two relaxation heuristics are then applied over the constructed label similarity graph to further mine the correlations among the candidate annotations and obtain the refined annotation results, which play a crucial role in semantic-based image retrieval. The main contributions of this work can be summarized as follows: (1) Exploiting a GMM trained by the RPEM algorithm to capture the initial semantic annotations of images. (2) Constructing the label similarity graph as a weighted linear combination of label similarity and visual similarity of images associated with the corresponding labels. (3) Refining the candidate set of annotations generated by the GMM by solving the max-bisection based on the rank-two relaxation algorithm over the weighted label graph. Compared to the current competitive model SGMM-RW, we achieve significant improvements of 4% and 5% in precision, and 6% and 9% in recall, on the Corel5k and Mirflickr25k datasets, respectively.

18.
Most remote sensing images exhibit a clear hierarchical structure which can be taken into account by defining a suitable model for the unknown segmentation map. To this end, one can resort to the tree-structured Markov random field (MRF) model, which describes a K-ary field by means of a sequence of binary MRFs, each one corresponding to a node in the tree. Here we propose to use the tree-structured MRF model for supervised segmentation. The prior knowledge on the number of classes and their statistical features allows us to generalize the model so that the binary MRFs associated with the nodes can be adapted freely, together with their local parameters, to better fit the data. In addition, it allows us to define a suitable likelihood term to be coupled with the TS-MRF prior so as to obtain a precise global model of the image. Given the complete model, a recursive supervised segmentation algorithm is easily defined. Experiments on a test SPOT image prove the superior performance of the proposed algorithm with respect to other comparable MRF-based or variational algorithms.

19.
The use of visual search for knowledge gathering in image decision support
This paper presents a new method of knowledge gathering for decision support in image understanding based on information extracted from the dynamics of saccadic eye movements. The framework involves the construction of a generic image feature extraction library, from which the feature extractors that are most relevant to the visual assessment by domain experts are determined automatically through factor analysis. The dynamics of the visual search are analyzed using a Markov model to provide training information to novices on how and where to look for image features. The validity of the framework has been evaluated in a clinical scenario in which the pulmonary vascular distribution on Computed Tomography images was assessed by experienced radiologists as a potential indicator of heart failure. The performance of the system has been demonstrated by training four novices to follow the visual assessment behavior of two experienced observers. In all cases, the accuracy of the students improved from near random decision making (33%) to accuracies ranging from 50% to 68%.

20.
In terrain visualization, LOD techniques are the most effective tool for real-time rendering of complex terrain scenes. Combining the advantages of the quadtree data structure and the triangulated irregular network (TIN) data structure, a terrain simplification algorithm based on a hybrid data structure is proposed. By using different error thresholds, the algorithm achieves viewpoint-dependent terrain simplification. With an effective error-control rule, it also solves the stitching problem between TIN tiles and eliminates terrain cracks. Experimental results show that the algorithm can efficiently generate continuous multiresolution terrain models and achieve smooth rendering of terrain scenes.

