Similar Documents
20 similar documents were retrieved.
1.
The Karhunen-Loève (K.L.) expansion is a useful tool for the representation, pre-processing and orthogonal coding of multispectral imagery: each spatial pixel is analysed independently, as the K.L. transform is taken in the spectral dimension, i.e. along the N spectral channels. The eigenvectors are those of the covariance matrix. The (principal) eigenimages are thus “false color” images, which can be viewed without decoding as the spatial topology is unchanged, and the higher-order principal images present a strong contrast enhancement.(1) These principal images are also uncorrelated, a very desirable feature for many applications including clustering.(2)

The source dependency of the eigenvectors, however, introduces “instability” in the form of pronounced statistical noise in some principal images. This paper gives the results of a numerical study carried out on a 7-channel Daedalus multispectral scene. The uncertainties of the eigenvalues and eigenvectors are evaluated from two “drawings” of the pixels of the raw data.

Both the numerical results of the study and direct viewing of the principal images show that three of the seven contain so much noise that they do not yield any useful information. Only the first two principal images have excellent stability, and they contain most of the total contrast variance of the scene. Two other principal images of lower order are also stable but contribute very little to the total contrast variance; these images carry texture information rather than homogeneous-zone clustering information.
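As a hedged illustration of the spectral K.L. transform described above, here is a minimal NumPy sketch; the array name and the (rows, cols, N) layout are assumptions, not the authors' code:

```python
import numpy as np

def spectral_kl_transform(cube):
    """Per-pixel K.L. (principal component) transform taken along the
    spectral dimension of a (rows, cols, N) multispectral image."""
    rows, cols, n_bands = cube.shape
    pixels = cube.reshape(-1, n_bands).astype(float)   # one row per pixel
    mean = pixels.mean(axis=0)
    centered = pixels - mean
    cov = np.cov(centered, rowvar=False)                # N x N spectral covariance
    eigvals, eigvecs = np.linalg.eigh(cov)              # ascending order
    order = np.argsort(eigvals)[::-1]                   # sort by decreasing variance
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    principal = centered @ eigvecs                      # uncorrelated "principal images"
    return principal.reshape(rows, cols, n_bands), eigvals, eigvecs
```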


2.
The CLADYN compressor relies on the topological properties of a color image to achieve the highest possible data-rate reduction with the minimum amount of visible degradation. In the case of digital color TV, the input data at 166 MB/s consists of the CCIR-imposed luminance Y and chrominance B−Y, R−Y components. The compressor output is 25.26 MB/s, exclusive of sound channels and error-correcting overhead. High reconstructed picture quality is obtained (a compression ratio of 6.56:1) without the use of any temporal TV redundancy. In the case of multispectral images, excellent image reconstruction is obtained, with a signal-to-quantization-noise ratio ranging from 40 to 50 dB and a data-rate reduction factor higher than 4:1 for scenes comprising 3–5 spectral channels. This type of compressor is not very sensitive to channel misregistration, is robust to the propagation of transmission errors, and outputs fixed-length words.

3.
The design of tree classifiers is considered from the statistical point of view. The procedure for calculating the a posteriori probabilities is decomposed into a sequence of steps; in every step the a posteriori probabilities for a certain subtask of the given pattern recognition task are calculated. The resulting tree classifier realizes a soft-decision strategy, in contrast to the hard-decision strategy of the conventional decision tree. At the nonterminal nodes, mean-square polynomial classifiers are applied; these estimate the desired a posteriori probabilities and provide an integrated feature-selection capability.
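One common way to realize such a stepwise, soft-decision computation, given here as an illustration of the idea rather than the paper's exact scheme, is to decompose the posterior of a class along the root-to-leaf path of the tree and estimate each conditional factor at the corresponding nonterminal node:

\[
P(\omega \mid x) \;=\; \prod_{k=0}^{K-1} P(n_{k+1} \mid n_k, x),
\qquad n_0 = \text{root},\; n_K = \text{leaf associated with } \omega .
\]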

4.
This article describes an approach to designing a distributed and modular neural classifier. The approach introduces a new hierarchical clustering that determines reliable regions in the representation space by exploiting supervised information. A multilayer perceptron is then associated with each detected cluster and charged with recognizing elements of that cluster while rejecting all others. The resulting global classifier comprises a set of cooperating neural networks, completed by a K-nearest-neighbor classifier charged with treating elements rejected by all the neural networks. Experimental results for the handwritten-digit recognition problem and comparisons with neural and statistical non-modular classifiers are given.
Received: 1 October 2002; Accepted: 21 November 2002; Published online: 6 June 2003
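A rough sketch of the overall architecture: clusters found on the labelled data, one multilayer perceptron per cluster with a rejection threshold, and a K-nearest-neighbor fallback for patterns rejected by every expert. The clustering algorithm (plain k-means), the threshold value and the majority-label assignment are placeholders, not the authors' supervised hierarchical clustering or exact decision scheme:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neural_network import MLPClassifier
from sklearn.neighbors import KNeighborsClassifier

class ModularClassifier:
    def __init__(self, n_clusters=10, reject_threshold=0.7):
        self.n_clusters, self.reject_threshold = n_clusters, reject_threshold

    def fit(self, X, y):
        # placeholder for the paper's supervised hierarchical clustering
        self.clusterer = KMeans(n_clusters=self.n_clusters, n_init=10).fit(X)
        labels = self.clusterer.labels_
        # one MLP per detected cluster, trained to accept its cluster and reject the rest
        self.experts = []
        for c in range(self.n_clusters):
            mlp = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500)
            mlp.fit(X, (labels == c).astype(int))
            # majority class inside the cluster, used here as the expert's answer (assumption)
            vals, counts = np.unique(y[labels == c], return_counts=True)
            self.experts.append((mlp, vals[np.argmax(counts)]))
        # K-NN fallback for patterns rejected by every expert
        self.knn = KNeighborsClassifier(n_neighbors=5).fit(X, y)
        return self

    def predict(self, X):
        out = np.empty(len(X), dtype=object)
        for i, x in enumerate(X):
            scores = [mlp.predict_proba(x[None, :])[0, 1] for mlp, _ in self.experts]
            best = int(np.argmax(scores))
            if scores[best] >= self.reject_threshold:
                out[i] = self.experts[best][1]
            else:                                   # rejected by all experts
                out[i] = self.knn.predict(x[None, :])[0]
        return out
```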

5.
The general philosophy and motivation for extracting classification information from histograms were developed in a previously published paper.(1) The present paper deals with an optimum signal-theory implementation of the same concepts by appropriate Fourier filtering of the data histogram. This strategy was developed to cluster remotely sensed multispectral imagery.

First, the theoretical foundations of clustering are explained in terms of Watanabe's ‘Ugly Duckling’ theorem on classification. A brief outline of the clustering strategy detailed in Leboucher and Lowitz is then given for the sake of clarity. In the following sections the clustering methodology is explained in terms of signal detection and signal filtering. The mathematical model is then refined and optimized using the prolate spheroidal wave function expansions of Slepian et al.

Computer simulations of this new strategy were conducted on test multispectral imagery, and some clustering results are presented.
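An illustrative sketch of the basic idea: smooth a one-dimensional feature histogram by low-pass Fourier filtering and read off its modes as candidate clusters. The crude ideal cutoff used here merely stands in for the optimized prolate-spheroidal filtering of the paper:

```python
import numpy as np

def cluster_modes_from_histogram(values, n_bins=256, keep_frequencies=8):
    """Smooth a 1-D feature histogram by ideal low-pass Fourier filtering and
    return the local maxima of the smoothed histogram as candidate cluster modes."""
    hist, edges = np.histogram(values, bins=n_bins)
    spectrum = np.fft.rfft(hist)
    spectrum[keep_frequencies:] = 0              # crude ideal low-pass filter
    smoothed = np.fft.irfft(spectrum, n=n_bins)
    # local maxima of the smoothed histogram = candidate cluster centres
    interior = np.arange(1, n_bins - 1)
    peaks = interior[(smoothed[1:-1] > smoothed[:-2]) & (smoothed[1:-1] > smoothed[2:])]
    centres = 0.5 * (edges[peaks] + edges[peaks + 1])
    return centres, smoothed
```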


6.
Arbitrary-shape object detection, a problem mostly associated with computer vision and image processing, deals with detecting objects of arbitrary shape in an image. In this paper, we treat arbitrary-shape object detection as a clustering application by decomposing images into representative data points and then clustering these points. Our method is based on COMUSA, an efficient algorithm for combining multiple clusterings. Extensive experimental evaluations on real and synthetically generated data sets demonstrate that the method is both accurate and efficient.

7.
This paper deals with the decision rules of a tree classifier for performing the classification at each nonterminal node, under the assumption of complete probabilistic information. For a given tree structure and given feature subsets, the optimal decision rules (strategy) that minimize the overall probability of misclassification are derived. The primary result is illustrated by an example.

8.
A simple learning algorithm for maximal-margin classifiers (equivalently, support vector machines with a quadratic cost function) is proposed. We build our iterative algorithm on top of the Schlesinger-Kozinec (S-K) algorithm from 1981, which finds a maximal-margin hyperplane with a given precision for separable data. We generalize the S-K algorithm (i) to the non-linear case using kernel functions and (ii) to non-separable data. The memory requirement is linear in the size of the data, which allows the proposed algorithm to be used for large training problems. The resulting algorithm is simple to implement and, as the experiments show, competitive with state-of-the-art algorithms. A Matlab implementation of the algorithm is available. We tested the algorithm on the problem of recognizing poor-quality numerals.
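A minimal sketch of the underlying geometric idea for the linear, separable case: a Kozinec-style search for the nearest points of the two class convex hulls, whose bisecting hyperplane is the maximal-margin separator. The kernel and non-separable generalizations proposed in the paper are not reproduced; the tolerance and initialization below are arbitrary:

```python
import numpy as np

def sk_maxmargin(X_pos, X_neg, eps=1e-3, max_iter=100000):
    """Search for the nearest points wp, wn of the two class convex hulls;
    the maximal-margin hyperplane bisects the segment between them."""
    wp, wn = X_pos[0].astype(float), X_neg[0].astype(float)
    for _ in range(max_iter):
        w = wp - wn
        norm_w = np.linalg.norm(w)
        # projections of all points onto the current direction (should exceed |w|)
        proj_pos = (X_pos - wn) @ w / norm_w
        proj_neg = (wp - X_neg) @ w / norm_w
        ip, jn = np.argmin(proj_pos), np.argmin(proj_neg)
        gap = norm_w - min(proj_pos[ip], proj_neg[jn])   # eps-optimality gap
        if gap < eps:
            break
        if proj_pos[ip] <= proj_neg[jn]:                 # adapt wp towards the worst violator
            x = X_pos[ip]
            d = wp - x
            t = np.clip((wp - wn) @ d / (d @ d + 1e-12), 0.0, 1.0)
            wp = (1 - t) * wp + t * x
        else:                                            # adapt wn towards the worst violator
            x = X_neg[jn]
            d = wn - x
            t = np.clip((wn - wp) @ d / (d @ d + 1e-12), 0.0, 1.0)
            wn = (1 - t) * wn + t * x
    w = wp - wn
    b = -(wp + wn) @ w / 2.0                             # hyperplane: w . x + b = 0
    return w, b
```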

9.
Enhanced uni-flow counterpropagation networks are used as pattern recognition systems and applied to the identification of chemical structure from corresponding infrared spectra. It is shown that such networks are more suitable for this type of problem than backpropagation networks, in terms of both training time and network performance. The problem of optimal classification between highly similar infrared spectra is addressed, and factors such as training-set size, sampling rate, data pre-processing, output data representation and the number of Kohonen-layer nodes are considered in this context. Such networks can achieve rates of correct classification in excess of 90%, although the learning of correct decision boundaries is highly sensitive to the above parameters when the non-informational content of the training and test data varies considerably relative to the informational content, so that the clustering of classes in pattern space is incomplete.

10.
This paper describes an application of the Cascade-Correlation (CC) network to pattern recognition. The task was to simulate an automatic vision inspection system that had to correctly classify five different objects. Feature vectors were extracted from circularly scanned 2D images and used as inputs to a neural network, which was then trained to classify an unknown presented object. The results show that the CC network is a viable tool for pattern recognition tasks: it is able to classify partially occluded objects with high accuracy and to considerably improve the classification of noisy images when a simple histogram-trimming preprocessing step is used.
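The abstract does not define the histogram-trimming step; a common reading, used here purely as an assumption rather than the authors' exact preprocessing, is to clip pixel intensities to a low/high percentile range so that the sparse, noise-dominated tails of the grey-level histogram are discarded:

```python
import numpy as np

def trim_histogram(image, low_pct=2.0, high_pct=98.0):
    """Clip pixel intensities to the [low_pct, high_pct] percentile range,
    discarding the sparse tails of the grey-level histogram."""
    lo, hi = np.percentile(image, [low_pct, high_pct])
    return np.clip(image, lo, hi)
```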

11.
The concept of a “mutualistic teacher” is introduced for unsupervised learning of the mean vectors of the components of a mixture of multivariate normal densities when the number of classes is also unknown. The unsupervised learning problem is formulated here as a multi-stage quasi-supervised problem incorporating a cluster approach. The mutualistic teacher creates a quasi-supervised environment at each stage by picking out “mutual pairs” of samples and assigning identical (but unknown) labels to the members of each mutual pair. The number of classes, if not specified, can be determined at an intermediate stage. The risk of assigning identical labels to the members of mutual pairs is estimated. Results of some simulation studies are presented.
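The abstract does not define a “mutual pair”; one natural reading, used here only as an illustrative assumption, is a pair of samples that are each other's nearest neighbour:

```python
import numpy as np

def mutual_pairs(X):
    """Return index pairs (i, j), i < j, such that j is the nearest neighbour of i
    and i is the nearest neighbour of j (an assumed reading of "mutual pair")."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)              # a point is not its own neighbour
    nn = d.argmin(axis=1)
    return [(i, j) for i, j in enumerate(nn) if nn[j] == i and i < j]
```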

12.
谢长菊 《计算机仿真》2010,27(4):188-191
C-SVM and ν-SVM are currently the two most mature support vector machine models, but they differ from each other in form and algorithm as well as in the properties and meaning of their parameters, which makes choosing between them inconvenient. To unify the two SVM models, a new model, Cν-SVM, is proposed, and the properties of its solution are studied on the basis of statistical learning theory. Completeness conditions for the solution of the new model are given, the solution and a corresponding algorithm are derived, and it is shown that ν/C is both an upper bound on the number of bounded support vectors and a lower bound on the total number of support vectors. The parameter settings show that the new model can realize all the functionality of the old models, while the new algorithm is more convenient for applications such as automatic text classification.
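For reference, the two classical models the paper sets out to unify have the following standard primal formulations (these are the textbook C-SVM and ν-SVM problems, not the paper's unified Cν-SVM model):

\[
\text{C-SVM:}\quad \min_{w,b,\xi}\ \tfrac{1}{2}\|w\|^2 + C\sum_{i=1}^{\ell}\xi_i
\quad\text{s.t.}\quad y_i(w\cdot x_i + b) \ge 1-\xi_i,\ \ \xi_i \ge 0,
\]
\[
\nu\text{-SVM:}\quad \min_{w,b,\rho,\xi}\ \tfrac{1}{2}\|w\|^2 - \nu\rho + \tfrac{1}{\ell}\sum_{i=1}^{\ell}\xi_i
\quad\text{s.t.}\quad y_i(w\cdot x_i + b) \ge \rho-\xi_i,\ \ \xi_i \ge 0,\ \rho \ge 0 .
\]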

13.
This paper surveys the applications of thinning in image processing, and examines the difficulties that confront existing thinning algorithms. A fundamental problem is that an algorithm may not be guaranteed to operate successfully on all possible images: in particular, it may not discriminate properly between ‘noise spurs’ and valid limbs, and the skeleton produced may not accurately reflect the shape of the object under scrutiny. Analysis of the situation results in a new, systematic approach to thinning, leading to a family of algorithms able to achieve guaranteed standards of skeleton precision. One algorithm of this family is described in detail.

“There is still no definitely good method for thinning” - Nagao(28)


14.
The importance of suitable distance measures between intuitionistic fuzzy sets (IFSs) arises from the role they play in inference problems. A concept closely related to that of a distance measure is a divergence measure, based on the notion of information-theoretic entropy first introduced in communication theory by Shannon (1949). The J-divergence is an important family of divergences. In this paper, we construct a J-divergence between IFSs. The proposed J-divergence induces useful distance and similarity measures between IFSs. Numerical examples demonstrate that the proposed measures perform well in clustering and pattern recognition.
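For orientation, the classical (Jeffreys) J-divergence between two discrete probability distributions is the symmetrized Kullback-Leibler divergence shown below; the paper's contribution is its construction for intuitionistic fuzzy sets, whose membership, non-membership and hesitancy components are not reproduced here:

\[
J(P,Q) \;=\; D_{\mathrm{KL}}(P\,\|\,Q) + D_{\mathrm{KL}}(Q\,\|\,P)
\;=\; \sum_i (p_i - q_i)\,\log\frac{p_i}{q_i}.
\]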

15.
One pattern recognition problem of importance in automated cytology is the detection of abnormal cells that may be present in a sample taken from a person. This work describes the problem and some possible solutions in the context of detecting pre-cancerous abnormalities in samples prepared as Pap smears for observation under light microscopy. A cell classification system must be able to use information about the various subclasses of normal and abnormal cells. The classification of such multi-modal data can be modeled with the Bayesian decision model, given knowledge of the subclasses composing each of the classes being decided. Two decision rules applicable to this problem are shown, suitable for the classification required for automated detection of atypical cells in a cervical smear. Test results from a series of holdout experiments indicate that average correct recognition rates of about 85% can be achieved on the atypical cells while maintaining an error rate of about 1% on the normal cells.
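One standard way to exploit subclass information in a Bayesian decision model, consistent with the setting above though not necessarily identical to the paper's two rules, is to write each class-conditional density as a mixture over its subclasses and then apply the minimum-error rule:

\[
p(x \mid \omega_i) = \sum_{j} P(\omega_{ij} \mid \omega_i)\, p(x \mid \omega_{ij}),
\qquad
\text{decide } \omega_i \ \text{if}\ \ P(\omega_i)\,p(x \mid \omega_i) \ge P(\omega_k)\,p(x \mid \omega_k)\ \ \forall k .
\]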

16.
A new scheme for optimizing codebook sizes for hidden Markov models (HMMs) and for generating HMM ensembles is proposed in this paper. In a discrete HMM, the vector quantization procedure and the generated codebook are associated with performance degradation. By using a selected clustering validity index, we show that the HMM codebook size can be optimized without training HMM classifiers. Moreover, the proposed scheme yields multiple optimized HMM classifiers, each based on a different codebook size. Using these to construct an ensemble of HMM classifiers, the scheme can compensate for the degradation of a discrete HMM.
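A hedged sketch of the codebook-size selection idea: build candidate codebooks by vector quantization (k-means here) and score each size with a clustering validity index, without training any HMM. The silhouette index is only an example; the index actually selected in the paper may differ:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

def select_codebook_sizes(features, candidate_sizes=(16, 32, 64, 128, 256), top_k=3):
    """Score each candidate codebook size with a clustering validity index and
    return the top_k sizes (one discrete HMM per size can then form an ensemble)."""
    scores = {}
    for k in candidate_sizes:
        km = KMeans(n_clusters=k, n_init=5).fit(features)
        scores[k] = silhouette_score(features, km.labels_)
    return sorted(scores, key=scores.get, reverse=True)[:top_k]
```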

17.
As humans, we have innate faculties that allow us to efficiently segment groups of objects. Computers, to some degree, can be programmed with similar categorization capabilities, which stem from exploratory data analysis. Among the various forms of exploratory data reasoning, clustering provides insight into the structure and relationships of input samples drawn from a number of distributions. To determine these relationships, many clustering methods rely on one or more human inputs, the most important being the number of distributions, c, to seek. This work investigates a technique for estimating the number of clusters from a general type of data called relational data. Several numerical examples are presented to illustrate the effectiveness of the proposed method.

18.
In the case of spatial multispectral images, such as remotely sensed earth cover, an entire frame covering a large spatial stretch may contain many classes, so meaningful dimensionality reduction can perhaps not be achieved without trading off classification quality. In such images, however, one most often encounters only a few classes within a small neighborhood, which makes it possible to devise a very effective dimensionality reduction around that small neighborhood, identified as a block. Based on this idea, a new method for dimensionality reduction is proposed in this paper.

The proposed method divides the image into uniform, non-overlapping windows (blocks). The few features that are essential for discriminating the classes in a window are identified, and clustering is performed independently on each block with the reduced feature set. The clusters found in the blocks are later merged to obtain an overall classification of the entire image. The efficacy of the method is corroborated experimentally.
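A rough sketch of the block-wise pipeline, with placeholder choices (variance-based band selection, k-means per block, centroid-distance merging) standing in for the paper's actual feature selection, clustering and merging steps:

```python
import numpy as np
from sklearn.cluster import KMeans

def blockwise_cluster(cube, block=64, n_features=3, n_clusters=4, merge_dist=0.5):
    """Cluster a (rows, cols, bands) image block by block with a reduced feature
    set, then merge block-level clusters whose full-spectrum centroids are close."""
    rows, cols, bands = cube.shape
    labels = np.zeros((rows, cols), dtype=int)
    centroids, next_label = [], 0
    for r0 in range(0, rows, block):
        for c0 in range(0, cols, block):
            win = cube[r0:r0 + block, c0:c0 + block].reshape(-1, bands).astype(float)
            # placeholder feature selection: keep the most variable bands in this block
            keep = np.argsort(win.var(axis=0))[-n_features:]
            km = KMeans(n_clusters=n_clusters, n_init=5).fit(win[:, keep])
            block_labels = np.empty(len(win), dtype=int)
            for c in range(n_clusters):
                centre = win[km.labels_ == c].mean(axis=0)
                dists = [np.linalg.norm(centre - g) for g in centroids] or [np.inf]
                if min(dists) < merge_dist:          # merge with a close existing cluster
                    block_labels[km.labels_ == c] = int(np.argmin(dists))
                else:                                 # otherwise open a new global cluster
                    centroids.append(centre)
                    block_labels[km.labels_ == c] = next_label
                    next_label += 1
            h, w = cube[r0:r0 + block, c0:c0 + block].shape[:2]
            labels[r0:r0 + block, c0:c0 + block] = block_labels.reshape(h, w)
    return labels
```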


19.
The diff3 System uses image processing and pattern recognition techniques to automatically analyze normal and abnormal white blood cells in a blood smear. The system consists of a spinner which creates a monolayer of cells on a glass slide, a stainer utilizing Wright's stain, the reagents to support the spinner and stainer, and an analyzer for automatic slide handling, analysis and report generation. The analyzer incorporates a wide range of image processing functions, including the generation and storage of gray scale image data, whole-field and partial-field image histogramming, and high-order binary image texture analysis and image transformation using the Golay processor (GLOPR). This paper describes the manner in which these hardware capabilities are used for white cell acquisition, scene segmentation and feature analysis. It concludes with some examples of texture extraction which illustrate the power of the Golay processor as a tool for image analysis.

20.
In the recent literature on digital image processing, much attention is devoted to the singular value decomposition (SVD) of a matrix. Many authors refer to the Karhunen-Loève transform (KLT) and principal components analysis (PCA) when treating the SVD. In this paper we give definitions of the three transforms and investigate their relationships. It is shown that, in the context of multivariate statistical analysis and statistical pattern recognition, the three transforms are very similar if a specific estimate of the column covariance matrix is used. In the context of two-dimensional image processing this similarity still holds if one single matrix is considered, although in that approach the use of the names KLT and PCA is rather inappropriate and confusing. If the matrix is considered to be a realization of a two-dimensional random process, the SVD and the two statistically defined transforms differ substantially.
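As a reminder of the relationship discussed above (standard linear algebra rather than anything specific to this paper): if X is an n × p data matrix whose columns have been centered, its SVD determines the PCA/KLT of the usual covariance estimate, so the principal directions are the right singular vectors and the principal components are the scaled left singular vectors:

\[
X = U\Sigma V^{\mathsf T},
\qquad
\hat{C} \;=\; \frac{1}{n-1}X^{\mathsf T}X \;=\; V\,\frac{\Sigma^{\mathsf T}\Sigma}{n-1}\,V^{\mathsf T},
\qquad
XV = U\Sigma .
\]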
