Similar Articles
20 similar articles found.
1.
In this paper, a segmentation method for grayscale images is proposed within the framework of region merging. The initial segmentation is obtained by finding edges in the image and splitting it into regions of a given shape. The adopted segmentation criterion is formulated in terms of a least-squares fit to the image intensity function, and segmentation is accomplished by finding an optimal partition with respect to the introduced information measure. Andrey Georgievich Bronevich. Born 1966. Graduated from the Taganrog State University of Radio Engineering in 1988. Received candidate's degree in 1994 and doctoral degree in 2004. Professor at the Taganrog State University of Radio Engineering. Scientific interests: number theory, mathematical statistics, possibility theory, theory of nonadditive measures, classification models, and pattern recognition. Author of more than 50 papers. Member of the Russian Association for Artificial Intelligence. Oleg Sergeevich Semeriy. Born 1978. Graduated from the Taganrog State University of Radio Engineering in 2001. Received candidate's degree in 2004. Scientific interests: computer vision, computer graphics, control theory, and robotics. Author of more than 30 papers.
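For illustration, a minimal sketch of greedy region merging under a piecewise-constant least-squares criterion is given below; the merge cost is the increase in the within-region sum of squared intensity deviations. This shows the general principle only, not the paper's information measure or its edge-based, shape-constrained initialization; the function names and the 4-neighbour adjacency are our own assumptions.

```python
# Sketch: greedy region merging with a piecewise-constant least-squares cost.
import numpy as np

def merge_cost(n1, s1, q1, n2, s2, q2):
    """Increase in SSE when two regions (count, sum, sum of squares) merge."""
    sse = lambda n, s, q: q - s * s / n
    return sse(n1 + n2, s1 + s2, q1 + q2) - sse(n1, s1, q1) - sse(n2, s2, q2)

def greedy_merge(image, labels, n_regions):
    """Merge adjacent regions of an initial over-segmentation until
    n_regions remain, always taking the cheapest merge first."""
    stats = {}  # label -> (pixel count, intensity sum, sum of squares)
    for lab in np.unique(labels):
        vals = image[labels == lab].astype(float)
        stats[lab] = (vals.size, vals.sum(), (vals ** 2).sum())
    while len(stats) > n_regions:
        # adjacency from 4-neighbour label pairs
        pairs = set()
        for a, b in [(labels[:, :-1], labels[:, 1:]), (labels[:-1, :], labels[1:, :])]:
            mask = a != b
            pairs |= set(map(tuple, np.sort(np.stack([a[mask], b[mask]], 1), axis=1)))
        best = min(pairs, key=lambda p: merge_cost(*stats[p[0]], *stats[p[1]]))
        keep, gone = best
        n1, s1, q1 = stats[keep]; n2, s2, q2 = stats.pop(gone)
        stats[keep] = (n1 + n2, s1 + s2, q1 + q2)
        labels[labels == gone] = keep
    return labels
```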

2.
The problem of segmentation from several images is considered for fragments of a scene containing a given object and its neighborhood. To solve this problem, generalizations of the well-known quantile and mode methods are proposed, constructed on the concept of random distance. The results of computer experiments on the segmentation of model and quasi-real scenes by the proposed methods are compared. Vyatcheslav Borisovich Fofanov. Born 1948. Graduated from the Kazan State University in 1971. Received candidate's degree in engineering in 1977. Currently an assistant professor at Kazan State University. Scientific interests: pattern recognition and image deciphering. Author of more than 40 publications. Ramil' Fuatovich Kuleev. Born 1985. Graduated from the Kazan State University in 2007. Currently a postgraduate student at the Chair of Economical Cybernetics, Kazan State University. Scientific interests: pattern recognition and image deciphering. Author of eight publications.

3.
Customer segmentation is an increasingly pressing issue in today's highly competitive commercial environment. A growing literature has studied the application of data mining technology to customer segmentation and achieved sound results. However, most studies segment customers with a single data mining technique from a particular viewpoint rather than within a systematic framework. Furthermore, one of the key purposes of customer segmentation is customer retention: although previous segmentation methods may identify which group needs more care, they cannot identify customer churn trends so that different actions can be taken. This paper proposes a customer segmentation framework based on data mining and constructs a new customer segmentation method based on survival characteristics. The new method consists of two steps. First, using the K-means clustering algorithm, customers are clustered into segments whose members have similar survival characteristics (churn trend). Second, each cluster's survival/hazard function is estimated by survival analysis, the validity of the clustering is tested, and the customer churn trend is identified. The method has been applied to a dataset from China Telecom, from which some useful management measures and suggestions were obtained. Some propositions for further research are also suggested.
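A minimal sketch of the two-step procedure follows, assuming a hypothetical customer table with behavioural feature columns, a 'tenure_months' lifetime column, and a 'churned' event indicator; the column names and the use of scikit-learn and lifelines are illustrative assumptions, not the paper's implementation.

```python
# Sketch: cluster customers, then estimate each segment's survival curve.
import pandas as pd
from sklearn.cluster import KMeans
from lifelines import KaplanMeierFitter

def segment_and_profile(customers: pd.DataFrame, feature_cols, k=4):
    # Step 1: cluster customers with similar survival-related behaviour.
    km = KMeans(n_clusters=k, n_init=10, random_state=0)
    customers = customers.copy()
    customers["segment"] = km.fit_predict(customers[feature_cols])

    # Step 2: estimate each segment's survival function to expose churn trend.
    curves = {}
    for seg, grp in customers.groupby("segment"):
        kmf = KaplanMeierFitter()
        kmf.fit(grp["tenure_months"], event_observed=grp["churned"],
                label=f"segment {seg}")
        curves[seg] = kmf.survival_function_
    return customers, curves
```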

4.
We introduce a segmentation-based detection and top-down figure-ground delineation algorithm. Unlike common methods which use appearance for detection, our method relies primarily on the shape of objects as reflected by their bottom-up segmentation. Our algorithm receives as input an image, along with its bottom-up hierarchical segmentation. The shape of each segment is then described both by its significant boundary sections and by regional, dense orientation information derived from the segment's shape using the Poisson equation. Our method then examines multiple, overlapping segmentation hypotheses, using their shape and color, in an attempt to find a "coherent whole," i.e., a collection of segments that consistently vote for an object at a single location in the image. Once an object is detected, we propose a novel pixel-level top-down figure-ground segmentation by a "competitive coverage" process to accurately delineate the boundaries of the object. In this process, given a particular detection hypothesis, we let the voting segments compete for interpreting (covering) each of the semantic parts of an object. Incorporating competition in the process allows us to resolve ambiguities that arise when two different regions are matched to the same object part and to discard nearby false regions that participated in the voting process. We provide quantitative and qualitative experimental results on challenging datasets. These experiments demonstrate that our method can accurately detect and segment objects with complex shapes, obtaining results comparable to those of existing state-of-the-art methods. Moreover, our method allows us to simultaneously detect multiple instances of class objects in images and to cope with challenging types of occlusions, such as occlusions by a bar of varying size or by another object of the same class, which are difficult to handle with other existing class-specific top-down segmentation methods.

5.
An adaptive algorithm for automatic segmentation of objects in cytological images is described. The algorithm is based on the well-known seeded region growing (SRG) method, is robust to noise, and can handle low-contrast images. The algorithm allows for automatic adjustment of its parameter values and initialization of cluster growth. Oleg L. Konevsky. Born 1973. Received master's degree in engineering from Novgorod State University in 1995 and candidate's degree (Eng.) from St. Petersburg State Technical University in 1998. Since 2000, an associate professor at the Information Technologies and Systems Department, Novgorod State University. Scientific interests: image segmentation, mathematical morphology, and neural networks. Author of nearly 30 papers in the field of pattern recognition and image analysis. Member of IEEE, IEEE Computer Society, and IEEE Signal Processing Society. Yurii V. Stepanets. Born 1980. Received master's degree in engineering from Novgorod State University in 2002. Currently, post-graduate student at the same university. Scientific interests: image segmentation, artificial intelligence, and neural networks. Author of ten papers in the field of pattern recognition and image analysis.
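For background, a bare-bones sketch of the seeded region growing idea the algorithm builds on is given below (grow a region from a seed by repeatedly absorbing the neighbouring pixel closest to the running region mean); the paper's automatic seed placement and parameter adaptation are not reproduced, and the tolerance parameter here is a placeholder.

```python
# Sketch: classic seeded region growing for one seed in a grayscale image.
import heapq
import numpy as np

def region_grow(image, seed, tol=15.0):
    h, w = image.shape
    grown = np.zeros((h, w), dtype=bool)
    grown[seed] = True
    mean, count = float(image[seed]), 1
    frontier = []  # priority queue keyed by |intensity - region mean|

    def push_neighbours(y, x):
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and not grown[ny, nx]:
                heapq.heappush(frontier, (abs(float(image[ny, nx]) - mean), ny, nx))

    push_neighbours(*seed)
    while frontier:
        _, y, x = heapq.heappop(frontier)
        if grown[y, x]:
            continue
        if abs(float(image[y, x]) - mean) > tol:  # re-check against current mean
            continue
        grown[y, x] = True
        mean = (mean * count + float(image[y, x])) / (count + 1)
        count += 1
        push_neighbours(y, x)
    return grown
```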

6.
In this paper, a system for laboratory rodent video tracking and behavior segmentation is proposed. A new real-time mouse pose estimation method is proposed, based on a semi-automatically generated animal shape model. Segmentation of behavior into separate behavior acts is treated as a signal segmentation problem using hidden Markov models (HMM). A conventional first-order HMM imposes a geometric prior distribution on segment length, which is inadequate for behavior segmentation. We propose a modification of the conventional first-order HMM that allows any prior distribution on segment length. Experiments show that the developed approach yields more adequate results compared to the conventional HMM.
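For context, the standard relationship the modification addresses can be written as follows (generic hidden semi-Markov notation, not necessarily the paper's):

```latex
% Geometric segment-length prior implied by a first-order HMM with
% self-transition probability a_{kk}:
P(L_k = l) = (1 - a_{kk})\, a_{kk}^{\,l-1}, \qquad l = 1, 2, \dots
% Replacing it by an arbitrary duration distribution d_k(l) gives an
% explicit-duration (hidden semi-Markov) model, scoring a segmentation
% into segments (k_1, l_1), ..., (k_N, l_N) of observations o_{1:T} as
P(\text{segmentation}, o_{1:T}) =
  \pi_{k_1} d_{k_1}(l_1) \prod_{n=2}^{N} a_{k_{n-1} k_n}\, d_{k_n}(l_n)
  \;\prod_{t=1}^{T} b_{s_t}(o_t),
% where s_t denotes the state active at time t.
```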

7.
Image Segmentation Fusing Global and Local Correntropy
Objective: To address the sensitivity of the LCK (local correntropy-based K-means) model to the initial contour, a new dynamically combined GLCK (global and local correntropy-based K-means) model based on global and local correntropy is proposed. Method: First, the correntropy criterion is introduced into the CV (Chan-Vese) model, yielding a new GCK (global correntropy-based K-means) model based on global correntropy. Then, combining it with the LCK model, the GLCK combined model is proposed, together with a dynamic combination algorithm to optimize it. The model performs segmentation in two steps: in the first step, the GCK model segments the rough contour of the target; in the second step, the contour obtained in the previous step is used as the initial contour of the LCK model to segment the image precisely. Results: Subjectively, natural and synthetic images were segmented and the results compared with the LCK, LBF, and CV models; the proposed model is more robust than all of them. Objectively, two natural images from the BSD database were segmented and quantitatively analyzed with the Jaccard similarity ratio, giving accuracies of 91.37% and 89.12%, respectively. Conclusion: The proposed algorithm is mainly suitable for segmenting medical images containing unknown noise and inhomogeneous intensity, as well as natural images of simple structure, and the segmentation result is robust to the initial contour.
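For reference, the classical Chan-Vese energy that the GCK model starts from is shown below in its standard form; according to the abstract, the paper introduces a correntropy criterion into this model (the exact GCK energy is not reproduced here):

```latex
E_{CV}(c_1, c_2, C) = \mu\,\mathrm{Length}(C)
  + \lambda_1 \int_{\mathrm{inside}(C)} |I(x) - c_1|^2\,dx
  + \lambda_2 \int_{\mathrm{outside}(C)} |I(x) - c_2|^2\,dx .
```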

8.
Objective: By transforming the Euler-Lagrange equation of the energy functional of existing region-based active contour models, an equivalence with the K-means method is established, and a new K-means-based active contour model is proposed that can effectively segment images with inhomogeneous intensity. Method: Combining global and local image information, and based on the properties of cross entropy, a new locally adaptive weight is proposed; it determines the segmentation threshold of each pixel adaptively according to the local statistics of the pixel's neighborhood, eliminating the influence of intensity inhomogeneity on the segmentation target. Results: The Jaccard similarity (JS) and Dice similarity coefficient (DSC) are used to quantitatively analyze the segmentation results on natural and synthetic images. Compared with traditional and recent classical active contour models, the JS and DSC values of the new model are closest to 1, and no more than 50 iterations are needed; the proposed model therefore has high computational efficiency and accuracy. Conclusion: Extensive experiments show that the new model, which combines global and local image information and derives adaptive weights from cross-entropy properties, is stable with respect to the initial curve position and segments intensity-inhomogeneous images well. The algorithm is mainly suitable for segmenting medical images containing noise and intensity inhomogeneity, and the segmentation result is robust to the initial contour.

9.
This paper presents a probabilistic framework based on Bayesian theory for the performance prediction and selection of an optimal segmentation algorithm. The framework models the optimal algorithm selection process as one that accounts for the information content of an input image as well as the behavioral properties of a particular candidate segmentation algorithm. The input image information content is measured in terms of image features, while the candidate segmentation algorithm's behavioral characteristics are captured through the use of segmentation quality features. Gaussian probability distribution models are used to learn the required relationships between the extracted image and algorithm features, and the framework is tested on the Berkeley Segmentation Dataset using four candidate segmentation algorithms.
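A minimal sketch of the selection principle follows, assuming (hypothetically) per-algorithm training pairs of image-feature vectors and a scalar segmentation-quality score; a joint Gaussian is fit per algorithm, and the algorithm with the highest expected quality given a new image's features is chosen. This illustrates Gaussian-model-based selection in general, not the paper's exact features or probability models.

```python
# Sketch: pick the segmentation algorithm with the highest expected quality
# under a per-algorithm joint Gaussian over (image features, quality score).
import numpy as np

def fit_models(training):  # training: {algo: (X features [n, d], y quality [n])}
    models = {}
    for algo, (X, y) in training.items():
        Z = np.column_stack([X, y])            # joint (features, quality)
        models[algo] = (Z.mean(axis=0), np.cov(Z, rowvar=False))
    return models

def expected_quality(model, x):
    """Conditional mean of the quality score given image features x,
    using the standard Gaussian conditioning formula."""
    mu, S = model
    x = np.asarray(x, dtype=float)
    d = len(x)
    mu_x, mu_y = mu[:d], mu[d]
    Sxx, Sxy = S[:d, :d], S[:d, d]
    return mu_y + Sxy @ np.linalg.solve(Sxx, x - mu_x)

def select_algorithm(models, x):
    return max(models, key=lambda a: expected_quality(models[a], x))
```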

10.
People dynamically structure social interactions and activities at various locations in their environments in specialized types of places such as the office, home, coffee shop, museum and school. They also imbue various locations with personal meaning, creating group 'hangouts' and personally meaningful 'places'. Mobile location-aware community systems can potentially utilize the existence of such 'places' to support the management of social information and interaction. However, acting effectively on this potential requires an understanding of: (1) how places and place-types relate to people's desire for place-related awareness of and communication with others; and (2) what information people are willing to provide about themselves to enable place-related communication and awareness. We present here the findings from two qualitative studies exploring these questions: a survey of 509 individuals in New York, and a study of how mobility traces can be used to find people's important places. These studies highlight how people value and are willing to routinely provide information such as ratings, comments, event records relevant to a place, and, when appropriate, their location to enable services. They also suggest how place and place-type data could be used in conjunction with other information regarding people and places so that systems can be deployed that respect users' People-to-People-to-Places data sharing preferences. We conclude with a discussion on how 'place' data can best be utilized to enable services when the systems in question are supported by a sophisticated computerized user-community social-geographical model.

11.
Objective: In traffic scenes, vehicles that are too close together or occlude one another are easily merged during recognition, which makes accurate detection of target vehicles difficult; an effective and reliable mechanism for segmenting occluded vehicles is therefore needed. Method: First, vehicle regions are determined on the basis of image blocks, and multi-vehicle cases are judged from the aspect ratio and fill ratio of the vehicle region. Then, a concave-region detection algorithm based on a "seven-cell grid" is proposed to find the concave regions between vehicles; matching the corresponding concave regions yields the occlusion region. Finally, the vehicle edge contours detected inside the occlusion region are used as splitting curves to separate the occluded vehicles. Results: Experimental results show that the algorithm achieves a high recognition rate while meeting real-time requirements and can accurately separate several mutually occluded vehicles along their edge contours. Compared with other algorithms, it improves the segmentation success rate and precision; both recall and precision reach 90%. Conclusion: A new method for segmenting occluded vehicles is proposed that effectively solves the difficulty and inaccuracy of segmenting occluded vehicles and has strong adaptability.

12.
We propose a special type of time series, which we call an item-set time series, to facilitate the temporal analysis of software version histories, email logs, stock market data, etc. In an item-set time series, each observed data value is a set of discrete items. We formalize the concept of an item-set time series and present efficient algorithms for segmenting a given item-set time series. Segmentation of a time series partitions the time series into a sequence of segments, where each segment is constructed by combining consecutive time points of the time series. Each segment is associated with an item set that is computed from the item sets of the time points in that segment, using a function which we call a measure function. We then define a concept called the segment difference, which measures the difference between the item set of a segment and the item sets of the time points in that segment. The segment difference values are required to construct an optimal segmentation of the time series. We describe novel and efficient algorithms to compute segment difference values for each of the measure functions described in the paper. We outline a dynamic programming based scheme to construct an optimal segmentation of the given item-set time series. We use the item-set time series segmentation techniques to analyze the temporal content of three different data sets: Enron email, stock market data, and a synthetic data set. The experimental results show that an optimal segmentation of item-set time series data captures much more temporal content than a segmentation constructed based on the number of time points in each segment, without examining the item set data at the time points, and can be used to analyze different types of temporal data.
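A compact sketch of the dynamic-programming skeleton follows, using one simple (assumed) choice of measure function, the intersection of the item sets in a segment, and of segment difference, the total symmetric difference between that intersection and each time point's item set; the paper's measure functions and its efficient difference computations are not reproduced.

```python
# Sketch: optimal segmentation of an item-set time series by dynamic programming.
from functools import reduce

def segment_difference(itemsets):
    rep = reduce(set.intersection, itemsets)        # measure function (intersection)
    return sum(len(rep ^ s) for s in itemsets)      # difference to each time point

def optimal_segmentation(series, k):
    """series: list of item sets; k: number of segments.
    Returns minimal total difference and the segment boundaries."""
    n = len(series)
    INF = float("inf")
    # Precompute cost of every contiguous segment (O(n^3) here; fine for a sketch).
    cost = [[segment_difference(series[i:j + 1]) if j >= i else INF
             for j in range(n)] for i in range(n)]
    best = [[INF] * (k + 1) for _ in range(n + 1)]   # best[t][m]: first t points, m segments
    back = [[0] * (k + 1) for _ in range(n + 1)]
    best[0][0] = 0.0
    for t in range(1, n + 1):
        for m in range(1, min(k, t) + 1):
            for s in range(m - 1, t):                # last segment covers points s..t-1
                c = best[s][m - 1] + cost[s][t - 1]
                if c < best[t][m]:
                    best[t][m], back[t][m] = c, s
    # Recover segment boundaries by backtracking.
    cuts, t, m = [], n, k
    while m > 0:
        s = back[t][m]
        cuts.append((s, t - 1))
        t, m = s, m - 1
    return best[n][k], list(reversed(cuts))
```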

13.
14.
In the paper, results of theoretical and experimental studies of the dynamics of the parameters of short correlation functions of speech signals are presented. We present the results of a theoretical investigation of the dependence of the maxima of the correlation functions of quasi-periodic signals, in a general form, on the characteristics defining the degree of their quasi-periodicity. Based on these dependences, estimates of the degree of quasi-periodicity, of the length of quasi-periodic intervals, of the main period, etc., are constructed. The obtained estimates are interpreted in the case of speech signals, and it is shown that many important parameters used in speech technologies can be calculated from them. Experimental results on the segmentation of isolated words into separate phonemes are given as an example of the efficiency of the approach based on the analysis of correlation functions. It is shown that segmentation based on monitoring the singularities of the dynamics of the parameters of short correlation functions is stable and adequate to perception. Vyacheslav E. Antsiperov. Born in 1959. Graduated from the Moscow Institute of Physics and Technology in 1982. Received candidate's degree in 1986. Senior Researcher at the Institute of Radio Engineering and Electronics, Russian Academy of Sciences. Scientific interests: information theory, recognition and identification theory, and computer modeling of biological aspects of human activities, including speech and image recognition and modeling of other functions of the central nervous system. Author of more than 30 papers. Vladimir A. Morozov. Born in 1932. Graduated from the Moscow Institute of Energetics in 1956. Received candidate's degree in 1964. Chief of the Statistical Radiophysics Department of the Institute of Radio Engineering and Electronics, Russian Academy of Sciences. Scientific interests: information theory, weak signal detection, signal processing, stochastic recognition, and speech recognition. Author of more than 80 papers. Sergei A. Nikitov. Born in 1955. Graduated from the Moscow Institute of Physics and Technology in 1979. Received candidate's degree in 1982 and doctoral degree in 1991. Professor, Principal Researcher, and Head of Laboratory at the Institute of Radio Engineering and Electronics, Russian Academy of Sciences. Since 2004, a corresponding member of the Russian Academy of Sciences. Scientific interests: informatics, including speech processing, magnetoelectronics, and nonlinear dynamics. Author of more than 100 papers.
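As a small illustration of the quantities involved, the sketch below estimates the main (pitch) period of a quasi-periodic frame from the position and height of the largest short-time autocorrelation maximum in a plausible lag range; the frequency bounds are assumptions, and the paper's theoretical estimates and segmentation logic are not reproduced.

```python
# Sketch: main-period estimation from the short-time autocorrelation of a frame.
import numpy as np

def main_period(frame, fs, fmin=60.0, fmax=400.0):
    """Return (period in samples, normalized peak height) for one frame,
    assuming the frame is long enough to cover the lowest expected pitch."""
    x = np.asarray(frame, dtype=float)
    x -= x.mean()
    r = np.correlate(x, x, mode="full")[len(x) - 1:]   # autocorrelation, lags >= 0
    r /= r[0] + 1e-12                                  # normalize so r[0] = 1
    lo, hi = int(fs / fmax), int(fs / fmin)            # plausible pitch-lag range
    lag = lo + int(np.argmax(r[lo:hi]))
    return lag, float(r[lag])                          # r[lag] ~ degree of quasi-periodicity
```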

15.
In this paper, we present an extensive experimental comparison of existing similarity metrics addressing the quality assessment problem of mesh segmentation. We introduce a new metric, named the 3D Normalized Probabilistic Rand Index (3D-NPRI), which outperforms the others in terms of properties and discriminative power. This comparative study includes a subjective experiment with human observers and is based on a corpus of manually segmented models. This corpus is an improved version of our previous one (Benhabiles et al. in IEEE International Conference on Shape Modeling and Application (SMI), 2009). It is composed of a set of 3D-mesh models grouped in different classes associated with several manual ground-truth segmentations. Finally, the 3D-NPRI is applied to evaluate six recent segmentation algorithms using our corpus and that of Chen et al. (ACM Trans. Graph. (SIGGRAPH), 28(3), 2009).
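For reference, the Probabilistic Rand Index that the 3D-NPRI normalizes has the standard form below (notation ours): c_ij indicates whether elements i and j share a label in the evaluated segmentation, and p_ij is the fraction of the K ground-truth segmentations in which they share a label.

```latex
\mathrm{PRI}\bigl(S, \{G_k\}\bigr) = \frac{1}{\binom{N}{2}}
  \sum_{i < j} \Bigl[ c_{ij}\, p_{ij} + (1 - c_{ij})(1 - p_{ij}) \Bigr],
\qquad
p_{ij} = \frac{1}{K} \sum_{k=1}^{K} \mathbf{1}\bigl[\, l_i^{G_k} = l_j^{G_k} \,\bigr].
```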

16.
An Improved K-means Active Contour Model
Objective: By transforming the Euler-Lagrange equation of the energy functional of the C-V model, its equivalence with the K-means method is established, and a new improved K-means active contour model based on a level set function is proposed. Method: The model contains a locally adaptive weight matrix function that determines the segmentation threshold of each pixel adaptively according to the local statistics of the pixel's neighborhood, eliminating the influence of intensity inhomogeneity on the segmentation target and thereby achieving accurate segmentation of intensity-inhomogeneous images. Results: Analysis of the segmentation results on synthetic and natural images shows that, compared with traditional and recent classical active contour models, the new model not only segments intensity-inhomogeneous images more accurately but also reduces the sensitivity to the choice of the initial curve. Conclusion: A new active contour model containing a weight matrix function is proposed; different weight functions can be designed according to the segmentation goal and the properties of the image to be segmented, so the model is widely applicable. A weight function with local statistical properties is given in the paper; it works well on intensity-inhomogeneous images and is stable with respect to the initial curve position.

17.
This paper addresses segmentation of multiple sclerosis lesions in multispectral 3-D brain MRI data. For this purpose, we propose a novel fully automated segmentation framework based on probabilistic boosting trees, which is a recently introduced strategy for supervised learning. By using the context of a voxel to be classified and its transformation to an overcomplete set of Haar-like features, it is possible to capture class-specific characteristics despite the well-known drawbacks of MR imaging. By successively selecting and combining the most discriminative features during ensemble boosting within a tree structure, the overall procedure is able to learn a discriminative model for voxel classification in terms of posterior probabilities. The final segmentation is obtained after refining the preliminary result by stochastic relaxation and a standard level set approach. A quantitative evaluation within a leave-one-out validation shows the applicability of the proposed method. The text was submitted by the authors in English. Michael Wels was born in 1979 and graduated with a degree in computer science from the University of Wuerzburg in 2006. Currently, he is a member of the research staff at the University of Erlangen-Nuremberg's Institute of Pattern Recognition working towards his Ph.D. His research interests are medical imaging in general and the application of machine learning techniques to medical image segmentation. Martin Huber studied at the University of Karlsruhe and received his Ph.D. degree in computer science in 1999. Since 1996, he has been with Siemens Corporate Technology and Siemens Medical Solutions. He is currently technical coordinator of the EU-funded project Health-e-Child, with research interests in medical imaging and semantic data integration. Joachim Hornegger graduated with a degree in computer science (1992) and received his Ph.D. in applied computer science (1996) at the University of Erlangen-Nuremberg (Germany). His Ph.D. thesis was on statistical learning, recognition, and pose estimation of 3-D objects. Joachim was a visiting scholar and lecturer at Stanford University (Stanford, CA, USA) during the 1997-1998 academic year, and, in 2007-2008, he was a visiting professor at Stanford's Radiological Science Laboratory. In 1998, he joined Siemens Medical Solutions Inc., where he worked on 3D angiography. In parallel with his responsibilities in industry, he was a lecturer at the Universities of Erlangen (1998-1999), Eichstaett-Ingolstadt (2000), and Mannheim (2000-2003). In 2003, Joachim became Professor of Medical Image Processing at the University of Erlangen-Nuremberg, and, since 2005, he has been a chaired professor heading the Institute of Pattern Recognition. His main research topics are currently pattern recognition methods in medicine and sports.

18.
For agents to collaborate in open multi-agent systems, each agent must trust in the other agents' ability to complete tasks and willingness to cooperate. Agents need to decide between cooperative and opportunistic behavior based on their assessment of another agent's trustworthiness. In particular, an agent can have two beliefs about a potential partner that tend to indicate trustworthiness: that the partner is competent and that the partner expects to engage in future interactions. This paper explores an approach that models competence as an agent's probability of successfully performing an action, and models belief in future interactions as a discount factor. We evaluate the underlying decision framework's performance given accurate knowledge of the model's parameters in an evolutionary game setting. We then introduce a game-theoretic framework in which an agent can learn a model of another agent online, using the Harsanyi transformation. The learning agents evaluate a set of competing hypotheses about another agent during the simulated play of an indefinitely repeated game. The Harsanyi strategy is shown to demonstrate robust and successful online play against a variety of static, classic, and learning strategies in a variable-payoff Iterated Prisoner's Dilemma setting.
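For context, a standard repeated-game condition of the kind such a model rests on (not the paper's specific derivation): with one-shot Prisoner's Dilemma payoffs T > R > P > S and discount factor δ (the belief in future interaction), mutual cooperation sustained by a trigger strategy is preferable to a one-time defection whenever

```latex
\frac{R}{1 - \delta} \;\ge\; T + \frac{\delta P}{1 - \delta}
\quad\Longleftrightarrow\quad
\delta \;\ge\; \frac{T - R}{T - P}.
```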

19.
Objective: MRI is gradually replacing CT for examining bones and joints, and accurate automatic segmentation of bone structures in shoulder MRI is crucial for measuring and diagnosing bone injuries and diseases. Existing bone segmentation algorithms cannot run fully automatically without any prior knowledge, and their generality and accuracy are relatively low. We therefore propose an automatic segmentation algorithm combining image patches and fully convolutional networks (PCNN and FCN). Method: First, four segmentation models are built, including three U-Net-based bone segmentation models (a humerus model, a joint-bone model, and a model treating the humeral head and joint bone as a whole) and one patch-based AlexNet segmentation model. The four models are then used to obtain candidate bone regions, and the locations of the humerus and joint bone are accurately detected by voting. Finally, the AlexNet segmentation model is applied within the detected bone regions to segment bone edges with pixel-level accuracy. Results: The experimental data come from eight patients of the orthopedics department of Harvard Medical School / Massachusetts General Hospital; each scan sequence contains about 100 annotated images. Five patients were used for training with five-fold cross-validation, and three patients were used to test the actual segmentation performance. The average Dice coefficient, positive predictive value (PPV), and sensitivity reached 0.92±0.02, 0.96±0.03, and 0.94±0.02, respectively. Conclusion: For a small patient dataset, the proposed method obtains very accurate shoulder-joint segmentation using deep learning on 2D medical images only. The algorithm has been integrated into our medical image measurement and analysis platform "3DQI", which can display 3D segmentations of the shoulder bones and provide clinical diagnostic guidance to orthopedists. The framework is also fairly general and is suitable for accurate segmentation of specific organs and tissues in CT and MRI with small datasets.

20.
A Variational Model for Capturing Illusory Contours Using Curvature
Illusory contours, such as the classical Kanizsa triangle and square [9], are intrinsic phenomena in human vision. These contours are not completely defined by real object boundaries, but also include illusory boundaries which are not explicitly present in the images. Therefore, the major computational challenge of capturing illusory contours is to complete the illusory boundaries. In this paper, we propose a level-set-based variational model to capture a typical class of illusory contours such as the Kanizsa triangle. Our model completes missing boundaries in a smooth way via Euler's elastica, and also preserves corners by incorporating curvature information of object boundaries. Our model can capture illusory contours regardless of whether the missing boundaries are straight lines or curves. We compare the choice of the second-order Euler's elastica used in our model and that of the first-order Euler's elastica developed in Nitzberg-Mumford-Shiota's work on the problem of segmentation with depth [15, 16]. We also prove that, with the incorporation of curvature information of object boundaries, our model can preserve corners as completely as one wants. Finally, we present numerical results obtained by applying our model to some standard illusory contours. This work has been supported by ONR contract N00014-03-1-0888, NSF contract DMS-9973341 and NIH contract P20 MH65166. Wei Zhu received the B.S. degree in Mathematics from Tsinghua University in 1994, the M.S. degree in Mathematics from Peking University in 1999, and the Ph.D. degree in Applied Mathematics from UCLA in 2004. He is currently a Postdoc at the Courant Institute, New York University. His research interests include mathematical problems in image processing and visual neuroscience. Tony Chan received the B.S. degree in engineering and the M.S. degree in aerospace engineering, both in 1973, from the California Institute of Technology, Pasadena, and the Ph.D. degree in computer science from Stanford University, Stanford, CA, in 1978. He is currently the Dean of the Division of Physical Science and College of Letters and Science, UCLA, where he has been a Professor in the Department of Mathematics since 1986. His research interests include PDE methods for image processing, multigrid and domain decomposition algorithms, iterative methods, Krylov subspace methods, and parallel algorithms.
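For reference, the boundary-completion energy referred to above has the standard Euler's elastica form (the exponent p distinguishes the first-order variant, p = 1, from the second-order variant, p = 2; constants and notation are generic, not the paper's):

```latex
E(\Gamma) = \int_{\Gamma} \bigl( a + b\,|\kappa|^{p} \bigr)\, ds,
\qquad a, b > 0,
% \kappa: curvature of the completed contour \Gamma; ds: arc length.
```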

