Similar Documents
20 similar documents retrieved (search time: 0 ms)
1.
In high-dimensional data, clusters of objects usually exist in subspaces; moreover, different clusters may have different shape volumes. Most existing methods for high-dimensional data clustering, however, consider only the former factor and ignore the latter by assuming the same shape volume for all clusters. In this paper we propose a new Gaussian mixture model (GMM) type algorithm for discovering clusters with various shape volumes in subspaces. We extend the GMM clustering method to calculate a local weight vector as well as a local variance within each cluster, and use the weight and variance values to capture the main properties that discriminate different clusters, including subsets of relevant dimensions and shape volumes. This is achieved by introducing the negative entropy of the weight vectors, along with adaptively chosen coefficients, into the objective function of the extended GMM. Experimental results on both synthetic and real datasets show that the proposed algorithm outperforms its competitors, especially when applied to high-dimensional datasets.
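Purely as an illustration of the kind of objective described here (the paper's exact formulation and coefficient scheme are not reproduced), an entropy-regularized, dimension-weighted mixture objective typically takes a form such as

J = -\sum_{i=1}^{n} \log \sum_{k=1}^{K} \pi_k \prod_{d=1}^{D} \mathcal{N}(x_{id} \mid \mu_{kd}, \sigma_{kd}^{2})^{w_{kd}} \;+\; \sum_{k=1}^{K} \gamma_k \sum_{d=1}^{D} w_{kd} \log w_{kd}, \qquad \sum_{d=1}^{D} w_{kd} = 1,\; w_{kd} \ge 0,

where w_{kd} is the local weight of dimension d in cluster k, \sigma_{kd}^{2} is the local variance, and \gamma_k is an adaptively chosen coefficient controlling the negative-entropy penalty; dimensions with larger weights act as the relevant subspace of cluster k.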

2.
In hyperspectral target detection, usually only part of the target pixels can be used as target signatures, which raises the question of whether these pixels can be used to construct the most appropriate background subspace for detecting all probable targets. In this paper, a dynamic subspace detection (DSD) method that establishes a multiple-detection framework is proposed. In each detection procedure, blocks of pixels are chosen by random selection followed by an analysis of the resulting detection performance distribution. Manifold analysis is then used to eliminate probable anomalous pixels and purify the subspace datasets, and the remaining pixels construct the subspace for that detection procedure. The final detection results are enhanced by fusing the target occurrence frequencies across all detection procedures. Experiments on both synthetic and real hyperspectral images (HSI) validate the proposed DSD method using several state-of-the-art methods as basic detectors. Compared with several single detectors and other multiple-detection methods, the DSD method yields improved receiver operating characteristic curves and better separability between targets and backgrounds. The DSD method also performs well with covariance-based detectors, showing its efficiency in selecting covariance information for detection.

3.
To detect corners effectively and speed up detection, a corner detection method based on the randomized Hough transform is proposed. The randomized Hough transform is first used to estimate straight-line parameters; then, exploiting the characteristics of corners in Hough space, the peaks in the parameter space are mapped back by the inverse Hough transform to locate the intersections of the detected lines in the image space. To avoid false corners, intersections with no edge pixels in their neighborhood are discarded, leaving the true corners. Experimental results show that, compared with the Harris and SUSAN algorithms, the method achieves better accuracy, robustness, and stability, and its real-time performance is also improved.
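As a rough sketch of the line-intersection idea (using OpenCV's standard, non-randomized Hough transform as a stand-in; thresholds and window size are illustrative):

```python
import cv2
import numpy as np

def corners_from_line_intersections(gray, edge_window=3):
    """Detect lines with the Hough transform, intersect them pairwise, and
    keep only intersections supported by nearby edge pixels."""
    edges = cv2.Canny(gray, 50, 150)
    lines = cv2.HoughLines(edges, 1, np.pi / 180, 80)
    corners = []
    if lines is None:
        return corners
    lines = lines[:, 0, :]                      # (N, 2) array of (rho, theta)
    h, w = edges.shape
    for i in range(len(lines)):
        for j in range(i + 1, len(lines)):
            r1, t1 = lines[i]
            r2, t2 = lines[j]
            A = np.array([[np.cos(t1), np.sin(t1)],
                          [np.cos(t2), np.sin(t2)]])
            if abs(np.linalg.det(A)) < 1e-6:    # near-parallel lines: no corner
                continue
            x, y = np.linalg.solve(A, np.array([r1, r2]))
            xi, yi = int(round(x)), int(round(y))
            if 0 <= xi < w and 0 <= yi < h:
                # reject "virtual" intersections with no edge support nearby
                win = edges[max(0, yi - edge_window): yi + edge_window + 1,
                            max(0, xi - edge_window): xi + edge_window + 1]
                if win.any():
                    corners.append((xi, yi))
    return corners
```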

4.
The Hough transform is a well-known method for detecting parametric curves in binary images. One major drawback of the method is that it requires time and memory space exponential in the number of parameters of the curves. An effective approach to reducing both the time and space requirements is parameter space decomposition. In this paper, we present two methods for the detection of ellipses based on the straight line Hough transform (SLHT).

The SLHT of a curve in the θ-ρ space can be expressed as the sum of two terms, namely the translation term and the intrinsic term. One useful property of this representation is that it allows the translation, rotation, and intrinsic parameters of the curve to be separated easily. Timing performance of the proposed methods compares favorably with that of other Hough-based methods.


5.
Reverse nearest neighbor (RNN) search is crucial in many real applications. In particular, given a database and a query object, an RNN query retrieves all the data objects in the database that have the query object as their nearest neighbor. Often, due to limitations of measurement devices, environmental disturbance, or the characteristics of applications (for example, monitoring moving objects), data obtained from the real world are uncertain (imprecise). Therefore, previous approaches proposed for answering an RNN query over exact (precise) databases cannot be directly applied to the uncertain scenario. In this paper, we re-define the RNN query in the context of uncertain databases, namely the probabilistic reverse nearest neighbor (PRNN) query, which obtains data objects with probabilities of being RNNs greater than or equal to a user-specified threshold. Since the retrieval of a PRNN query requires accessing all the objects in the database, which is quite costly, we also propose an effective pruning method, called geometric pruning (GP), that significantly reduces the PRNN search space without introducing any false dismissals. Furthermore, we present an efficient PRNN query procedure that seamlessly integrates our pruning method. Extensive experiments have demonstrated the efficiency and effectiveness of our proposed GP-based PRNN query processing approach under various experimental settings.

6.
Uncertain data are common due to the increasing use of sensors, radio frequency identification (RFID), GPS and similar devices for data collection. The causes of uncertainty include limitations of measurements, inclusion of noise, inconsistent supply voltage, and delay or loss of data in transfer. In order to manage, query or mine such data, data uncertainty needs to be considered. Hence, this paper studies the problem of top-k distance-based outlier detection from uncertain data objects. In this work, an uncertain object is modelled by the probability density function of a Gaussian distribution. The naive approach to distance-based outlier detection uses a nested loop, which is very costly due to the expensive distance function between two uncertain objects. Therefore, a populated-cells list (PC-list) approach to outlier detection is proposed. Using the PC-list, the proposed top-k outlier detection algorithm needs to consider only a fraction of the dataset objects and hence quickly identifies candidate objects for the top-k outliers. Two approximate top-k outlier detection algorithms are presented to further increase the efficiency of the top-k outlier detection algorithm. An extensive empirical study on synthetic and real datasets is also presented to demonstrate the accuracy, efficiency and scalability of the proposed algorithms.
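The naive nested-loop baseline mentioned above can be sketched as follows for ordinary (non-uncertain) points; it only shows the quadratic structure that the PC-list is designed to avoid, not the paper's Gaussian-object distance or the PC-list itself:

```python
import numpy as np

def top_k_outliers(points, k_nn=5, top_k=10):
    """Score each point by the distance to its k_nn-th nearest neighbour
    (nested loop over all pairs) and return the top_k highest-scoring points.
    points: (n, d) numpy array."""
    n = len(points)
    scores = np.empty(n)
    for i in range(n):
        dists = np.linalg.norm(points - points[i], axis=1)
        dists[i] = np.inf                                # exclude the point itself
        scores[i] = np.partition(dists, k_nn - 1)[k_nn - 1]
    order = np.argsort(-scores)[:top_k]
    return order, scores[order]
```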

7.
Due to the pervasive data uncertainty in many real applications, efficient and effective query answering on uncertain data has recently gained much attention from the database community. In this paper, we propose a novel and important query in the context of uncertain databases, namely probabilistic group subspace skyline (PGSS) query, which is useful in applications like sensor data analysis. Specifically, a PGSS query retrieves those uncertain objects that are, with high confidence, not dynamically dominated by other objects, with respect to a group of query points in ad-hoc subspaces. In order to enable fast PGSS query answering, we propose effective pruning methods to reduce the PGSS search space, which are seamlessly integrated into an efficient PGSS query procedure. Furthermore, to achieve low query cost, we provide a cost model, in light of which uncertain data are pre-processed and indexed. Extensive experiments have been conducted to demonstrate the efficiency and effectiveness of our proposed approaches.

8.
Road detection is an important research topic for intelligent vehicles. Learning-based methods divide the image captured in front of the vehicle into regions and then classify each region as road or non-road. Because real scenes are complex, there are uncertain regions that contain both road and non-road content, and forcing such a region into a single road or non-road label is unreasonable. To address this problem, a new segmentation-based road detection algorithm is proposed, whose core is RCUR (Re-classification on Uncertain Regions). After the uncertain regions are detected, the complementary strengths of different segmentation algorithms are used to split each uncertain region into several sub-regions; by combining and classifying these sub-regions, the road and non-road parts within an uncertain region can be separated effectively. Experiments show that the algorithm adapts to the diversity of road surfaces in real scenes, improves road detection accuracy, and reduces the influence of noise on the detection results.

9.
This paper presents the MOUGH (mixture of uniform and Gaussian Hough) Transform for shape-based object detection and tracking. We show that the edgels of a rigid object at a given orientation are approximately distributed according to a Gaussian mixture model (GMM). A variant of the generalized Hough transform is proposed that votes using GMMs and is optimized via Expectation-Maximization; it is capable of searching images for a mildly deformable shape, based on a training dataset of (possibly noisy) images with only crude estimates of the scale and centroid of the object in each image. Further modifications are proposed to optimize the algorithm for tracking. The method can locate and track objects reliably even against complex backgrounds such as dense moving foliage, and with a moving camera. Experimental results indicate that the algorithm is superior to previously published variants of the Hough transform and to active shape models in tracking pedestrians from a side view.

10.
A circle detection method based on the Hough transform
Several common circle detection methods are reviewed, including the classical Hough transform (HT), the randomized HT, and the generalized HT. Drawing on the strengths and weaknesses of these methods, an improved circle detection method based on the classical HT is proposed. The image is first preprocessed by grayscale conversion, denoising, edge detection, and mathematical morphology, and the Hough transform is then applied. The main idea is to replace the classical looping procedure with a multidimensional array. The method was applied to real images from an automatic fabric water-repellency test; a comparison between the classical and the improved Hough transform shows that the detection speed is improved and the detection accuracy reaches a satisfactory level.
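A minimal preprocessing-plus-Hough pipeline in the spirit of this abstract can be sketched with OpenCV; the HOUGH_GRADIENT variant, the file name, and the parameters below are illustrative and do not reproduce the paper's multidimensional-array accumulator:

```python
import cv2
import numpy as np

img = cv2.imread("fabric_sample.png")            # hypothetical test image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)     # grayscale conversion
gray = cv2.medianBlur(gray, 5)                   # denoising filter
circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1, minDist=40,
                           param1=100, param2=40, minRadius=10, maxRadius=120)
if circles is not None:
    for x, y, r in np.round(circles[0]).astype(int):
        cv2.circle(img, (int(x), int(y)), int(r), (0, 255, 0), 2)
```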

11.
Clustering of data in an uncertain environment can result in different partitions of the data at different points in time. Therefore, the initially formed clusters of non-stationary data can adapt over time, which means that feature vectors associated with different clusters can follow different migration types to and from other clusters. This paper investigates different data migration types and proposes a technique to generate artificial non-stationary data that follows different migration types. Furthermore, the paper proposes clustering performance measures that are better suited to measuring clustering quality in a non-stationary environment than the measures used for stationary environments. The proposed measures are then used to compare the clustering results of three network-based artificial immune models, since the adaptability and self-organising behaviour of the natural immune system inspired the modelling of such models for clustering non-stationary data.

12.
Spatial attributes are important factors for predicting customer behavior. However, thorough studies on this subject have never been carried out. This paper presents a new idea that incorporates spatial predicates describing the spatial relationships between customer locations and surrounding objects into customer attributes. More specifically, we developed two algorithms in order to achieve spatially enabled customer segmentation. First, a novel filtration algorithm is proposed that can select more relevant predicates from the huge number of spatial predicates than existing filtration algorithms. Second, since spatial predicates fundamentally involve some uncertainty, a rough set-based spatial data classification algorithm is developed to handle that uncertainty and therefore provide effective spatial data classification. A series of experiments was conducted, and the results indicate that our proposed methods are superior to existing methods for data classification.

13.
Towards a system for automatic facial feature detection
A model-based methodology is proposed to detect facial features from a front-view ID-type picture. The system is composed of three modules: context (i.e. face location), eye, and mouth. The context module is a low-resolution module which defines a face template in terms of intensity valley regions. The valley regions are detected using morphological filtering and 8-connected blob coloring. The objective is to generate a list of hypothesized face locations ranked by face likelihood. The detailed analysis is left to the high-resolution eye and mouth modules. The aim of both is to confirm as well as refine the locations and shapes of their respective features of interest. The detection is done via a two-step modelling approach based on the Hough transform and the deformable template technique. The results show that the proposed system locates facial features very quickly, with adequate or better fit in over 80% of the images.
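The low-resolution valley-detection step can be sketched as follows, with a morphological black-hat filter and connected-component labelling standing in for the morphological filtering and 8-connected blob coloring described; the file name, kernel size, and threshold are illustrative:

```python
import cv2

gray = cv2.imread("id_photo.png", cv2.IMREAD_GRAYSCALE)         # hypothetical input image
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (15, 15))
valleys = cv2.morphologyEx(gray, cv2.MORPH_BLACKHAT, kernel)     # dark (valley) regions pop out
_, mask = cv2.threshold(valleys, 20, 255, cv2.THRESH_BINARY)
num_blobs, labels = cv2.connectedComponents(mask)                # default 8-connectivity labelling
```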

14.
We develop an ellipse detection algorithm based on multisets mixture learning (MML) that differs from the conventional Hough transform perspective. The algorithm has potential advantages in terms of noise resistance, detection of incomplete ellipses, and detection of multiple ellipses.

15.
Increasingly large amounts of multidimensional data are being generated daily in many applications, creating a strong demand for learning algorithms that extract useful information from these massive data. This paper surveys the field of multilinear subspace learning (MSL) for dimensionality reduction of multidimensional data directly from their tensorial representations. It discusses the central issues of MSL, including establishing the foundations of the field via multilinear projections, formulating a unifying MSL framework for systematic treatment of the problem, examining the algorithmic aspects of typical MSL solutions, and categorizing both unsupervised and supervised MSL algorithms into taxonomies. Lastly, the paper summarizes a wide range of MSL applications and concludes with perspectives on future research directions.
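The basic building block of a multilinear projection, the mode-n product, can be sketched as follows (dimensions are illustrative):

```python
import numpy as np

def mode_n_product(X, U, n):
    """Mode-n product X x_n U: projects mode n of tensor X with matrix U,
    whose rows are the projection directions for that mode."""
    Xn = np.tensordot(X, U, axes=([n], [1]))   # contracted mode moves to the end
    return np.moveaxis(Xn, -1, n)              # restore the original mode order

# e.g. project a 30x40x20 tensor sample down to 5x6x4
X = np.random.rand(30, 40, 20)
U1, U2, U3 = np.random.rand(5, 30), np.random.rand(6, 40), np.random.rand(4, 20)
Y = mode_n_product(mode_n_product(mode_n_product(X, U1, 0), U2, 1), U3, 2)
assert Y.shape == (5, 6, 4)
```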

16.
Omni-directional sensors are useful for obtaining a 360° field of view. With a radially symmetric mirror and a conventional lens system, this can be achieved with a single camera. Several mirror profiles have been proposed, but most violate the single viewpoint (SVP) criterion necessary for functional equivalence to the standard perspective projection, posing challenges that have not yet been addressed in the literature. An imaging system with a non-SVP optical system does not benefit from the affine property that straight-line features are represented as collinear points in the image plane. To utilize these non-SVP mirrors, a new method to recognize such features is required. This work describes an approach to detecting features in panoramic non-SVP images using a modified Hough transform. A mathematical model for this feature extraction process is given. Experimental results are presented to validate this model and show robust performance in identifying line features with only estimated calibration.

17.
This paper proposes a shape detection method based on the Hough transform, called the polar-coded multiresolution Hough transform. Both the template and the image are represented in polar coordinates as one-dimensional sequences, which not only simplifies the mapping operation of the Hough transform but also allows matching multiresolution descriptions of the image space and the parameter space to be constructed, so that detection can proceed efficiently from coarse to fine resolution in a tree-search-like manner. Experimental results are given.

18.
Circle detection using a two-step Hough transform
赵京东 《计算机应用》2008,28(7):1761-1763
The Hough transform plays an important role in image processing and is an effective method for detecting curves. Detecting circles with the traditional Hough transform, however, requires large storage space and long computation time. A circle detection method using a two-step Hough transform is therefore proposed: by exploiting the slope property of circles, it reduces the dimensionality of the Hough parameter space and improves computational efficiency, and the method is further extended to ellipse detection.
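As a rough sketch of splitting the 3-D (a, b, r) accumulator into two cheaper stages (this follows the common gradient-based centre/radius decomposition, not necessarily the paper's exact slope-based formulation; thresholds are illustrative):

```python
import cv2
import numpy as np

def two_step_circle_hough(gray, r_min=10, r_max=100, n_centers=5):
    """Stage 1: vote for circle centres by stepping along the local gradient
    direction from every edge pixel. Stage 2: estimate each candidate centre's
    radius from a 1-D histogram of edge-pixel distances."""
    edges = cv2.Canny(gray, 50, 150)
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)
    h, w = gray.shape
    acc = np.zeros((h, w), dtype=np.int32)
    ys, xs = np.nonzero(edges)
    for x, y in zip(xs, ys):
        mag = np.hypot(gx[y, x], gy[y, x])
        if mag < 1e-3:
            continue
        dx, dy = gx[y, x] / mag, gy[y, x] / mag
        for r in range(r_min, r_max):            # vote along the gradient line
            for sign in (1, -1):
                cx, cy = int(x + sign * r * dx), int(y + sign * r * dy)
                if 0 <= cx < w and 0 <= cy < h:
                    acc[cy, cx] += 1
    # keep the strongest centres, then read off each radius from a 1-D histogram
    top = np.argsort(acc, axis=None)[::-1][:n_centers]
    circles = []
    for cy, cx in np.column_stack(np.unravel_index(top, acc.shape)):
        radii = np.hypot(xs - cx, ys - cy).astype(int)
        radii = radii[(radii >= r_min) & (radii < r_max)]
        if len(radii):
            circles.append((int(cx), int(cy), int(np.bincount(radii).argmax())))
    return circles
```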

19.
Object detection based on Hough forests is an improvement of the implicit shape model (ISM), in which a random forest carries out the generalized Hough transform. To further improve detection performance, the fact that object locations in the training images are known is fully exploited: the classical offset-uncertainty measure is improved and the voting of the random forest is optimized, so that the true object locations receive more votes and higher vote values in the Hough space. Experiments verify that, compared with the classical method, the approach yields more accurate object detection.

20.
In this paper we present a new credal classification rule (CCR) based on belief functions to deal with uncertain data. CCR allows objects to belong (with different masses of belief) not only to specific classes, but also to sets of classes called meta-classes, which correspond to the disjunction of several specific classes. Each specific class is characterized by a class center (i.e. prototype) and consists of all the objects that are sufficiently close to that center. The belief of assigning a given object to a specific class is determined from the Mahalanobis distance between the object and the center of the corresponding class. The meta-classes are used to capture the imprecision in the classification of objects that are difficult to classify correctly because of the poor quality of the available attributes. The selection of meta-classes depends on the application and the context, and a measure of the degree of indistinguishability between classes is introduced. In this new CCR approach, objects assigned to a meta-class should be close to the center of that meta-class and have similar distances to the centers of all the involved specific classes, while objects too far from all the others are considered outliers (noise). CCR provides robust credal classification results with a relatively low computational burden. Several experiments using both artificial and real data sets are presented at the end of this paper to evaluate and compare the performance of the CCR method with respect to other classification methods.
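The Mahalanobis-distance computation underlying the belief assignment can be sketched as follows (the CCR mass-assignment and meta-class construction themselves are not reproduced here):

```python
import numpy as np

def mahalanobis_to_centers(x, centers, covariances):
    """Distance of a sample x to each specific-class centre, from which the
    belief masses of the specific classes would then be derived."""
    dists = []
    for mu, cov in zip(centers, covariances):
        diff = np.asarray(x) - np.asarray(mu)
        dists.append(float(np.sqrt(diff @ np.linalg.inv(cov) @ diff)))
    return np.array(dists)
```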
