Found 20 similar documents; search took 15 ms.
1.
In this paper, various methods are introduced for improving the ability of fuzzy classifier systems to automatically generate fuzzy if-then rules for pattern classification problems with continuous attributes. First, we describe a simple fuzzy classifier system where a randomly generated initial population of fuzzy if-then rules is evolved by typical genetic operations, such as selection, crossover, and mutation. By computer simulations on a real-world pattern classification problem with many continuous attributes, we show that the search ability of such a simple fuzzy classifier system is not high. Next, we examine the search ability of a hybrid algorithm where a learning procedure of fuzzy if-then rules is combined with the fuzzy classifier system. Then, we introduce two heuristic procedures for improving the performance of the fuzzy classifier system. One is a heuristic rule generation procedure for an initial population where initial fuzzy if-then rules are directly generated from training patterns. The other is a heuristic population update procedure where new fuzzy if-then rules are generated from misclassified and rejected training patterns, as well as from existing fuzzy if-then rules by genetic operations. By computer simulations, we demonstrate that these two heuristic procedures drastically improve the search ability of the fuzzy classifier system. We also examine a variant of the fuzzy classifier system where the population size (i.e., the number of fuzzy if-then rules) varies depending on the classification performance of fuzzy if-then rules in the current population.
2.
Speed-up fractal image compression with a fuzzy classifier (cited 4 times: 0 self-citations, 4 by others)
This paper presents a fractal image compression scheme incorporating a fuzzy classifier that is optimized by a genetic algorithm. The fractal image compression scheme requires finding range blocks that match domain blocks among all possible divisions of an image into subblocks. With suitable classification of the subblocks by a fuzzy classifier, we can reduce the search time for this matching process and so speed up the encoding process in the scheme. Implementation results show that, by introducing three image classes and using a fuzzy classifier optimized by a genetic algorithm, the encoding process can be sped up by about 40% relative to an unclassified encoding system.
3.
With the increase in user mobility, data scheduling is among the most challenging issues in Worldwide Interoperability for Microwave Access (WiMAX) for vehicular ad-hoc networks (VANETs), since the network promises its users a high quality of service. No existing technique fully delivers that quality: uncertainty in the decision process caused by imprecise data leads to a starvation problem. With vehicles joining the network traffic in ever greater numbers, this paper devises a two-stage optimized priority scheduling scheme, the Evolving Intuitionistic Fuzzy Priority Classifier with Bio-inspiration Based Scheduling Scheme. The work takes into account the hesitation degree of each priority factor together with bio-inspiration-based classification. Our simulations show that the proposed scheme can adapt to and improve on existing VANET approaches in terms of high spectrum effectiveness and low outage probability.
4.
A cost-effective semisupervised classifier approach with kernels (cited 3 times: 0 self-citations, 3 by others)
In this paper, we propose a cost-effective iterative semisupervised classifier based on a kernel concept. The proposed technique incorporates unlabeled data into the design of a binary classifier by introducing and optimizing a cost function in a feature space that maximizes the Rayleigh coefficient while minimizing the total cost associated with misclassified labeled samples. The cost assigned to misclassified labeled samples accounts for the number of misclassified labeled samples as well as the amount by which they are on the wrong side of the boundary, and this counterbalances any potential adverse effect of unlabeled data on the classifier performance. Several experiments performed with remotely sensed data demonstrate that the proposed semisupervised classifier yields considerable improvements over its supervised-only counterpart.
5.
A global optimization method is introduced that minimizes the rate of misclassification. We first derive the theoretical basis for the method, on which we base the development of a novel design algorithm, and demonstrate its effectiveness and superior performance in the design of practical classifiers for some of the most popular structures currently in use. The method, grounded in ideas from statistical physics and information theory, extends the deterministic annealing approach for optimization, both to incorporate structural constraints on data assignments to classes and to minimize the probability of error as the cost objective. During the design, data are assigned to classes in probability so as to minimize the expected classification error given a specified level of randomness, as measured by Shannon's entropy. The constrained optimization is equivalent to a free-energy minimization, motivating a deterministic annealing approach in which the entropy and expected misclassification cost are reduced with the temperature while enforcing the classifier's structure. In the limit, a hard classifier is obtained. This approach is applicable to a variety of classifier structures, including the widely used prototype-based, radial basis function, and multilayer perceptron classifiers. The method is compared with learning vector quantization, back propagation (BP), several radial basis function design techniques, as well as with paradigms for more directly optimizing all these structures to minimize probability of error. The annealing method achieves significant performance gains over other design methods on a number of benchmark examples from the literature, while often retaining design complexity comparable with or only moderately greater than that of strict descent methods. Substantial gains, both inside and outside the training set, are achieved for complicated examples involving high-dimensional data and large class overlap.
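The annealing idea admits a compact illustration. The sketch below is not the authors' implementation (the per-class costs are hypothetical values); it only shows how Gibbs-distribution assignment probabilities are nearly uniform at high temperature and harden toward a deterministic classification as the temperature is lowered:

```python
import math

def soft_assignments(costs, temperature):
    """Gibbs-distribution class probabilities: p(class j | x) is
    proportional to exp(-cost_j / T)."""
    weights = [math.exp(-c / temperature) for c in costs]
    total = sum(weights)
    return [w / total for w in weights]

# Hypothetical per-class misclassification costs for one sample.
costs = [0.2, 1.0, 3.0]

hot = soft_assignments(costs, temperature=10.0)   # high T: nearly uniform
cold = soft_assignments(costs, temperature=0.05)  # low T: nearly hard
```

As the temperature is driven toward zero, the probability mass concentrates on the minimum-cost class, recovering the hard classifier mentioned in the abstract.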
6.
A charge-based fixed-weight neural Hamming classifier with an on-chip normalization facility is described. The classifier utilizes a purely capacitive synapse matrix for quantization and a multiport sense amplifier for discrimination. The discriminator is compatible with variable-weight synapses as well. A detailed analysis of the classifier configuration is presented; design issues are addressed, and limitations are identified. It is shown that the ratio of the maximum Hamming weight to the minimum Hamming distance that can be handled by the classifier has an upper bound. As long as the exemplars comply with this upper bound, the network does not impose any limitation on the word length. A very large exemplar count, on the other hand, can impair connection density, but this problem can be averted by using multiple discriminators. A 2-μm p-well CMOS test chip containing a Hamming classifier with ten 20-bit exemplars is described.
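In software terms, the quantize-then-discriminate behaviour of such a circuit is minimum-Hamming-distance classification. The following sketch uses hypothetical 8-bit exemplars (the chip itself stores ten 20-bit ones) purely to illustrate the rule the analog hardware implements:

```python
def hamming_distance(a, b):
    """Number of bit positions where two equal-length words differ."""
    return sum(x != y for x, y in zip(a, b))

def classify(word, exemplars):
    """Return the index of the exemplar closest in Hamming distance."""
    distances = [hamming_distance(word, e) for e in exemplars]
    return distances.index(min(distances))

# Three hypothetical 8-bit exemplars.
exemplars = ["00000000", "11110000", "11111111"]
noisy = "11010000"  # exemplar 1 with one flipped bit
print(classify(noisy, exemplars))  # → 1
```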
7.
Bone scintigraphy is an effective method to diagnose bone diseases such as bone tumors. In the scintigraphic images, bone abnormalities are widely scattered over the whole body. Conventionally, radiologists visually check the whole-body images and find the distributed abnormalities based on their expertise. This manual process is time-consuming and it is not unusual to miss some abnormalities. In this paper, a computer-aided diagnosis (CAD) system is proposed to assist radiologists in the diagnosis of bone scintigraphy. The system provides warning marks and abnormality scores at some locations of the images to direct radiologists' attention toward these locations. A fuzzy system called the characteristic-point-based fuzzy inference system (CPFIS) is employed to implement the diagnosis system, and three minimizations are used to systematically train the CPFIS. Asymmetry and brightness are chosen as the two inputs to the CPFIS according to radiologists' knowledge. The resulting CAD system has a small rule base, so that the resulting fuzzy rules can not only be easily understood by radiologists but also matched against their expert knowledge. The prototype CAD system was tested on 82 abnormal images and 27 normal images. We employed the free-response receiver operating characteristic method, with the mean number of false positives (FPs) and the sensitivity as performance indexes, to evaluate the proposed system. The sensitivity is 91.5% (227 of 248) and the mean number of FPs is 37.3 per image. The high sensitivity and moderate number of FP marks per image show that the proposed method can provide effective second-reader information to radiologists in the diagnosis of bone scintigraphy.
8.
Jong-Hwan Kim, Jong-Hwan Park, Seon-Woo Lee, E.K.P. Chong. IEEE Transactions on Industrial Electronics, 1994, 41(2): 155-162
Existing fuzzy control methods do not perform well when applied to systems containing nonlinearities arising from unknown deadzones. In particular, we show that a usual "fuzzy PD" controller applied to a system with a deadzone suffers from poor transient performance and a large steady-state error. In this paper, we propose a novel two-layered fuzzy logic controller for controlling systems with deadzones. The two-layered control structure consists of a fuzzy logic-based precompensator followed by a usual fuzzy PD controller. Our proposed controller exhibits superior transient and steady-state performance compared to usual fuzzy PD controllers. In addition, the controller is robust to variations in deadzone nonlinearities. We illustrate the effectiveness of our scheme using computer simulation examples.
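To see why a precompensator helps, consider a simple deadzone model. The sketch below assumes a known deadzone width for illustration only; the paper's fuzzy precompensator is precisely about coping with deadzones whose width is unknown:

```python
def deadzone(u, width=0.5):
    """Actuator deadzone: commands within ±width produce no output."""
    if u > width:
        return u - width
    if u < -width:
        return u + width
    return 0.0

def precompensate(u, width=0.5):
    """Idealized inverse of the deadzone: shift the command past the
    dead band so the actuator reproduces it (assumes width is known)."""
    if u > 0:
        return u + width
    if u < 0:
        return u - width
    return 0.0

# A small command is swallowed by the deadzone; precompensated, it
# passes through intact.
print(deadzone(0.25))                  # → 0.0
print(deadzone(precompensate(0.25)))   # → 0.25
```

Without compensation, small corrective commands from the PD layer never reach the plant, which is the source of the steady-state error the abstract describes.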
9.
In this paper we present a decision tree with a reject option at each node, which we call a ternary decision tree; the principle of its construction is defined, and a new classification rule extending the classical k-nearest-neighbor rule is proposed. The method has been applied to the monitoring of the core of a fast breeder reactor using the neutronic signal.
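The abstract does not give the exact reject rule, but the flavour of a k-nearest-neighbor classifier with a reject option can be sketched as follows (1-D toy data; requiring unanimity among the k neighbours is an assumption made here for illustration):

```python
from collections import Counter

def knn_with_reject(x, train, k=3, min_votes=3):
    """k-NN that returns 'reject' when the neighbours do not agree
    strongly enough; with min_votes == k, unanimity is required."""
    neighbours = sorted(train, key=lambda p: abs(p[0] - x))[:k]
    label, votes = Counter(lbl for _, lbl in neighbours).most_common(1)[0]
    return label if votes >= min_votes else "reject"

# 1-D toy data: two well-separated classes.
train = [(0.0, "normal"), (0.1, "normal"), (0.2, "normal"),
         (1.0, "fault"), (1.1, "fault"), (1.2, "fault")]

print(knn_with_reject(0.05, train))  # deep inside one class → "normal"
print(knn_with_reject(0.6, train))   # between the classes → "reject"
```

In a monitoring setting such as the reactor application, the reject branch defers ambiguous signals to further analysis rather than forcing a possibly wrong label.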
10.
Bin Chen, Guorui Feng, Xinpeng Zhang, Fengyong Li. Signal, Image and Video Processing, 2014, 8(8): 1475-1482
This paper proposes a JPEG steganalysis scheme based on the ensemble classifier and a high-dimensional feature space. We first combine three current feature sets and remove the unimportant features according to the correlation between different feature parts, so as to form a new feature space for steganalysis. This way, the dependencies among cover and steganographic images can still be represented by features of reduced dimensionality. Furthermore, we design a proportion mechanism to manage the feature selection in two subspaces for each base learner of the ensemble classifier. Experimental results show that the proposed scheme can effectively defeat the MB and nsF5 steganographic methods and that its performance is better than that of existing steganalysis approaches.
11.
K.C. Tan, Q. Yu, T.H. Lee. IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews), 2005, 35(2): 131-142
This paper presents a distributed coevolutionary classifier (DCC) for extracting comprehensible rules in data mining. It allows different species to be evolved cooperatively and simultaneously, while the computational workload is shared among multiple computers over the Internet. Through intercommunication among different species of rules and rule sets in a distributed manner, the concurrent processing and computational speed of the coevolutionary classifiers are enhanced. The advantages and performance of the proposed DCC are validated on various datasets obtained from the UCI machine learning repository. It is shown that the prediction accuracy of DCC is robust and that the computation time is reduced as the number of remote engines increases. Comparison results illustrate that the DCC produces good classification rules for the datasets, competitive with existing classifiers in the literature.
12.
A new diagnosis approach for handling tolerance in analog and mixed-signal circuits by using fuzzy math (cited 8 times: 0 self-citations, 8 by others)
Peng Wang, Shiyuan Yang. IEEE Transactions on Circuits and Systems I: Regular Papers, 2005, 52(10): 2118-2127
A novel analysis method for analog circuit test and diagnosis is described in this paper. Fuzzy math is used to represent the diagnosis hypotheses and to express the diagnosis strategy. On this basis, a new equivalent fault model is presented and used for test-node selection and design for test. In particular, a parametric fault test for linear analog circuits with tolerance analysis is presented, using both the sensitivity method and a fuzzy analysis method.
13.
E. Barnard, R.A. Cole, M.P. Vea, F.A. Alleva. IEEE Transactions on Signal Processing, 1991, 39(2): 298-307
Pitch detection based on neural-net classifiers is investigated. To this end, the extent of generalization attainable with neural nets is examined, and the choice of features is discussed. For pitch detection, two feature sets, one based on waveform samples and the other based on properties of waveform peaks, are introduced. Experiments with neural classifiers demonstrate that the latter feature set, which has better invariance properties, performs more successfully. It is found that the best neural-net pitch tracker approaches the level of agreement of human labellers on the same data set, and performs competitively in comparison to a sophisticated feature-based tracker. An analysis of the errors committed by the neural net (relative to the hand labels used for training) reveals that they are mostly due to inconsistent hand labeling of ambiguous waveform peaks.
14.
In the literature, multiple classifier systems (MCSs) have proved to be a valuable approach to combining classifiers, and under some conditions MCSs are able to mimic ideal Bayesian labeling. This paper focuses on the family of MCSs based on dynamic classifier selection (DCS) and proposes a modification to dynamic classifier selection by local accuracy (DCS-LA). Experiments show that the proposed method outperforms MCS strategies based on belief functions and the original DCS-LA in terms of minimum and maximum class accuracies and the kappa coefficient of agreement, and is a valid alternative to majority voting. Moreover, the experiments show that MCSs built on classifiers of low design complexity, such as maximum likelihood and nearest mean classifiers, can yield accuracies quite comparable to those of highly optimized classifiers.
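The DCS-LA idea that this work modifies can be sketched in a few lines: for each test sample, estimate every base classifier's accuracy on the sample's nearest labeled neighbours and let the locally most accurate one decide. This is a minimal 1-D illustration with hypothetical threshold classifiers, not the paper's proposed modification:

```python
def dcs_la(x, classifiers, train, k=3):
    """Dynamic classifier selection by local accuracy: pick the
    classifier most accurate on the k training points nearest to x,
    then use it to label x."""
    neighbours = sorted(train, key=lambda p: abs(p[0] - x))[:k]

    def local_accuracy(clf):
        return sum(clf(px) == lbl for px, lbl in neighbours) / k

    best = max(classifiers, key=local_accuracy)
    return best(x)

# Two hypothetical base classifiers with different decision thresholds.
clf_a = lambda x: "pos" if x > 0.5 else "neg"  # accurate around 0.5
clf_b = lambda x: "pos" if x > 2.0 else "neg"  # accurate only far right

train = [(0.2, "neg"), (0.4, "neg"), (0.7, "pos"), (0.9, "pos")]
print(dcs_la(0.8, [clf_a, clf_b], train))  # → "pos"
```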
15.
A framework for fuzzy recognition technology (cited 3 times: 0 self-citations, 3 by others)
H.L. Larsen, R.R. Yager. IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews), 2000, 30(1): 65-76
Presents a scheme for object recognition by classificatory problem solving in the framework of fuzzy sets and possibility theory. The scheme has a particular focus on handling the imperfection problems that are common in application domains where the objects to be recognized (detected and identified) represent undesirable situations, referred to as crises. Crises develop over time, and observations typically increase in number and precision as the crisis develops. Early detection and precise recognition of crises are desired, since they increase the possibility of an effective treatment. The crisis recognition problem is central in several areas of decision support, such as medical diagnosis, financial decision making and early warning systems. The problem is characterized by vague knowledge and observations suffering from several kinds of imperfections, such as missing information, imprecision, uncertainty, unreliability of the source, and mutual (possibly conflicting or reinforcing) observations of the same phenomena. The problem of handling possibly imperfect observations from multiple sources includes the problems of information fusion and multiple-sensor data fusion. The different kinds of imperfection are handled in the framework of fuzzy sets and possibility theory.
16.
This paper introduces a new face coding and recognition method, the enhanced Fisher classifier (EFC), which employs the enhanced Fisher linear discriminant model (EFM) on integrated shape and texture features. Shape encodes the feature geometry of a face while texture provides a normalized shape-free image. The dimensionalities of the shape and the texture spaces are first reduced using principal component analysis, constrained by the EFM for enhanced generalization. The corresponding reduced shape and texture features are then combined through a normalization procedure to form the integrated features that are processed by the EFM for face recognition. Experimental results, using 600 face images corresponding to 200 subjects of varying illumination and facial expressions, show that (1) the integrated shape and texture features carry the most discriminating information, followed in order by textures, masked images, and shape images, and (2) the new coding and face recognition method, EFC, performs best among the eigenfaces method using the L1 or L2 distance measure and the Mahalanobis distance classifiers using a common covariance matrix for all classes or a pooled within-class covariance matrix. In particular, EFC achieves 98.5% recognition accuracy using only 25 features.
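The abstract does not specify the normalization procedure used to combine the reduced features. A common choice, shown here purely as an illustration with hypothetical feature values, is to z-score each reduced feature vector before concatenation so that shape and texture contribute on comparable scales:

```python
import math

def zscore(v):
    """Normalize a feature vector to zero mean and unit variance."""
    mean = sum(v) / len(v)
    std = math.sqrt(sum((x - mean) ** 2 for x in v) / len(v))
    return [(x - mean) / std for x in v]

# Hypothetical reduced shape and texture features for one face; note
# the very different scales of the two raw vectors.
shape = [0.01, 0.03, 0.02]
texture = [120.0, 250.0, 80.0]

integrated = zscore(shape) + zscore(texture)  # concatenated features
```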
17.
18.
M.W. Cresswell, D. Khera, L.W. Linholm, C.E. Schuster. IEEE Transactions on Semiconductor Manufacturing, 1992, 5(3): 255-263
A technique for training an expert system for semiconductor wafer fabrication process diagnosis is described. The technique partitions an existing set of electrically tested semiconductor wafers into groups so that all wafers within each group have similar spatial distributions of the electrical test data across selected die sites. The spatial distribution of test data from the selected die sites on each wafer is referred to as the test pattern of that wafer. A directed graph that is developed by the partitioning algorithm then efficiently classifies a new incoming wafer to one of the groups established during partitioning on the basis of its test pattern. The distribution of known processing histories of wafers within the group to which the new incoming wafer is classified provides a provisional diagnosis of the incoming wafer's process history.
19.
20.
Newton's method for nonparallel plane proximal classifier with unity norm hyperplanes (cited 3 times: 0 self-citations, 3 by others)
Santanu Ghorai, Shaikh Jahangir Hossain, Anirban Mukherjee, Pranab K. Dutta. Signal Processing, 2010, 90(1): 93-104
In our previous research we observed that the nonparallel plane proximal classifier (NPPC), obtained by minimizing two related regularized quadratic optimization problems, performs comparably to other support vector machine classifiers but at a much lower computational cost. NPPC classifies binary patterns by their proximity to one of two nonparallel hyperplanes. To calculate the distance of a pattern from a hyperplane we need the Euclidean norm of the hyperplane's normal vector, which should therefore equal unity. In the original formulation of NPPC, however, these equality constraints were not considered, and without them the solutions of the objective functions are not guaranteed to satisfy the constraints. In this work we reformulate NPPC with those equality constraints, solve it by Newton's method, and update the solution by solving a system of linear equations with the conjugate gradient method. The performance of the reformulated NPPC is verified experimentally on several benchmark and synthetic data sets for both linear and nonlinear classifiers. Apart from the technical improvement of adding these constraints to the NPPC formulation, the results indicate enhanced computational efficiency of the nonlinear NPPC on large data sets under the proposed framework.
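The unity-norm issue is easy to see numerically: the raw value w·x + b equals the distance to the hyperplane only after division by ||w||, so constraining ||w|| = 1 makes the raw value itself the distance. A minimal sketch (toy 2-D numbers, not the NPPC solver):

```python
import math

def distance_to_hyperplane(x, w, b):
    """Signed distance of pattern x from the hyperplane w·x + b = 0:
    the raw value w·x + b divided by the Euclidean norm of w."""
    raw = sum(wi * xi for wi, xi in zip(w, x)) + b
    return raw / math.sqrt(sum(wi * wi for wi in w))

w = [3.0, 4.0]  # ||w|| = 5
print(distance_to_hyperplane([1.0, 1.0], w, -2.0))  # → 1.0

# Scaling w and b by 2 changes the raw value w·x + b (it doubles)
# but leaves the geometric distance unchanged:
w2 = [6.0, 8.0]  # ||w2|| = 10, same hyperplane
print(distance_to_hyperplane([1.0, 1.0], w2, -4.0))  # → 1.0
```

This scale ambiguity is exactly why leaving the norm unconstrained, as in the original NPPC objectives, does not pin down the distances the classifier relies on.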