Similar Documents
20 similar documents found (search time: 15 ms)
1.
In the last two decades, non-invasive methods based on acoustic analysis of the voice signal have proven to be excellent and reliable tools for diagnosing vocal fold pathologies. This paper proposes a new feature vector based on the wavelet packet transform and singular value decomposition for the detection of vocal fold pathology. A k-means-clustering-based feature weighting is proposed to increase the discriminative power of the proposed features. In this work, two databases are used: the Massachusetts Eye and Ear Infirmary (MEEI) voice disorders database and the MAPACI speech pathology database. Four supervised classifiers are employed to test the proposed features: k-nearest neighbour (k-NN), least-squares support vector machine, probabilistic neural network, and general regression neural network. The experimental results show that the proposed features achieve a very promising classification accuracy of 100% on both the MEEI and MAPACI databases.
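A minimal sketch of the kind of pipeline this abstract describes, assuming Python with PyWavelets and scikit-learn; the decomposition depth, wavelet family, matrix shape fed to the SVD, and the variance-ratio weighting are illustrative assumptions, not details taken from the paper:

```python
import numpy as np
import pywt
from sklearn.cluster import KMeans

def wpt_svd_features(signal, wavelet="db4", level=3):
    """Summarize each terminal wavelet-packet node by its largest
    singular value (one plausible WPT+SVD feature vector)."""
    wp = pywt.WaveletPacket(data=signal, wavelet=wavelet, maxlevel=level)
    feats = []
    for node in wp.get_level(level, order="natural"):
        c = node.data
        mat = c[: len(c) // 8 * 8].reshape(8, -1)  # fold coefficients into a matrix
        feats.append(np.linalg.svd(mat, compute_uv=False)[0])
    return np.asarray(feats)

def kmeans_feature_weights(X, n_clusters=2):
    """Weight each feature by its between-cluster vs. within-cluster
    variance under a k-means partition (a sketch of cluster-based weighting)."""
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(X)
    w = np.empty(X.shape[1])
    for j in range(X.shape[1]):
        means = [X[labels == c, j].mean() for c in range(n_clusters)]
        within = sum(X[labels == c, j].var() for c in range(n_clusters))
        w[j] = np.var(means) / (within + 1e-12)
    return w / w.sum()
```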

2.
In the big data era, more and more data are collected from multiple views, each of which reflects a distinct perspective on the data. Many multi-view data are accompanied by incompatible views and high dimensionality, both of which pose challenges for multi-view clustering. This paper proposes a strategy of simultaneous weighting on views and features to discriminate their importance. Each feature of the multi-view data is given bi-level weights that express its importance at the feature level and the view level, respectively. Furthermore, we implement the proposed weighting method in the classical k-means algorithm to perform the multi-view clustering task. An efficient gradient-based optimization algorithm is embedded in the k-means algorithm to compute the bi-level weights automatically, and the convergence of the proposed weight-updating method is proved by theoretical analysis. In the experimental evaluation, synthetic datasets with varied noise and missing values are created to investigate the robustness of the proposed approach, which is then compared with five state-of-the-art algorithms on three real-world datasets. The experiments show that the proposed method compares very favourably against the other methods.
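As an illustration only, the bi-level weighting could enter the k-means assignment step through a distance of the following form; the view slicing, names, and the multiplicative combination of the two weight levels are assumptions, and the paper's actual objective and gradient updates are not reproduced here:

```python
import numpy as np

def bilevel_weighted_sqdist(x, center, view_slices, view_w, feat_w):
    """Squared distance in which each feature's contribution is scaled by
    its view-level weight times its feature-level weight."""
    d = 0.0
    for v, sl in enumerate(view_slices):
        diff = x[sl] - center[sl]
        d += view_w[v] * np.sum(feat_w[sl] * diff ** 2)
    return d

# Hypothetical usage: two views occupying features 0-9 and 10-24
# view_slices = [slice(0, 10), slice(10, 25)]
# assign x to argmin over c of bilevel_weighted_sqdist(x, centers[c], view_slices, vw, fw)
```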

3.
Speech and speaker recognition are important tasks for computer systems. In this paper, an expert speaker recognition system based on optimum wavelet packet entropy is proposed for speaker recognition from real speech/voice signals. The study combines a new feature extraction approach with a classification approach, both built on optimum wavelet packet entropy parameter values. These optimum values are obtained from real English speech/voice waveforms measured with a speech experimental set. A genetic-wavelet packet-neural network (GWPNN) model is developed, comprising three layers: a genetic algorithm, a wavelet packet stage, and a multi-layer perceptron. The genetic algorithm layer of GWPNN selects the feature extraction method and obtains the optimum wavelet entropy parameter values. One of four feature extraction methods is selected by the genetic algorithm: wavelet packet decomposition; wavelet packet decomposition with the short-time Fourier transform; wavelet packet decomposition with the Born-Jordan time-frequency representation; and wavelet packet decomposition with the Choi-Williams time-frequency representation. The wavelet packet layer performs optimum feature extraction in the time-frequency domain and is composed of wavelet packet decomposition and wavelet packet entropies. The multi-layer perceptron of GWPNN, a feed-forward neural network, evaluates the fitness function of the genetic algorithm and classifies speakers. The performance of the developed system was evaluated using noisy English speech/voice signals. The test results show that the system is effective on real speech signals, with a correct classification rate of about 85% for speaker classification.
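One common definition of a wavelet packet entropy, sketched below under the assumption of Shannon entropy over normalized node energies; the paper evaluates several time-frequency variants that are not reproduced here:

```python
import numpy as np
import pywt

def wavelet_packet_entropy(signal, wavelet="db2", level=4):
    """Shannon entropy of the normalized energy distribution across the
    terminal wavelet-packet nodes of a signal."""
    wp = pywt.WaveletPacket(data=signal, wavelet=wavelet, maxlevel=level)
    energies = np.array([np.sum(node.data ** 2)
                         for node in wp.get_level(level, order="natural")])
    p = energies / (energies.sum() + 1e-12)
    return float(-np.sum(p * np.log2(p + 1e-12)))
```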

4.
5.
Vector quantization (VQ) can perform efficient feature extraction from the electrocardiogram (ECG), with the advantages of dimensionality reduction and increased accuracy. However, existing dictionary learning algorithms for vector quantization are sensitive to dirty data, which compromises classification accuracy. To tackle this problem, we propose a novel dictionary learning algorithm that employs k-medoids clustering seeded by k-means++ and builds dictionaries by searching for and using representative samples, which avoids the interference of dirty data and thus boosts the classification performance of ECG systems based on vector quantization features. We apply our algorithm to vector quantization feature extraction for ECG beat classification and compare it with popular features such as the sampling point feature, the fast Fourier transform feature, the discrete wavelet transform feature, and our previous beat vector quantization feature. The results show that the proposed method yields the highest accuracy and reduces the computational complexity of the ECG beat classification system. The proposed dictionary learning algorithm provides more efficient encoding of ECG beats and can improve ECG classification systems based on encoded features.
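A compact sketch of the clustering core, assuming squared Euclidean distance; the paper's full dictionary learning and VQ encoding pipeline is not reproduced, and the names below are illustrative:

```python
import numpy as np

def kmeans_pp_init(X, k, rng):
    """k-means++ seeding: each new medoid is drawn with probability
    proportional to its squared distance from the nearest one so far."""
    idx = [int(rng.integers(len(X)))]
    for _ in range(k - 1):
        d2 = ((X[:, None, :] - X[idx]) ** 2).sum(-1).min(1)
        idx.append(int(rng.choice(len(X), p=d2 / d2.sum())))
    return idx

def k_medoids(X, k, iters=100, seed=0):
    """Alternating k-medoids: medoids are actual samples, which is the
    property that shields the learned dictionary from dirty data."""
    rng = np.random.default_rng(seed)
    medoids = kmeans_pp_init(X, k, rng)
    for _ in range(iters):
        labels = ((X[:, None, :] - X[medoids]) ** 2).sum(-1).argmin(1)
        new = []
        for c in range(k):
            members = np.flatnonzero(labels == c)
            cost = ((X[members][:, None, :] - X[members]) ** 2).sum(-1).sum(1)
            new.append(int(members[cost.argmin()]))
        if new == medoids:
            break
        medoids = new
    return X[medoids], labels
```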

6.
This paper proposes a new method for weighting subspaces in feature groups and individual features when clustering high-dimensional data. In this method, the features of high-dimensional data are divided into feature groups based on their natural characteristics. Two types of weights are introduced into the clustering process to simultaneously identify the importance of feature groups and of individual features in each cluster. A new optimization model defines the optimization process, and a new clustering algorithm, FG-k-means, is proposed to solve it. The new algorithm extends k-means with two additional steps that automatically calculate the two types of subspace weights. A new data generation method is presented to generate high-dimensional data with clusters in subspaces of both feature groups and individual features. Experimental results on synthetic and real-life data show that FG-k-means significantly outperformed four k-means-type algorithms, i.e., k-means, W-k-means, LAC and EWKM, in almost all experiments. The new algorithm is robust to the noise and missing values that commonly exist in high-dimensional data.
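For orientation, k-means-type subspace weighting schemes in this family typically update weights in closed form from within-cluster dispersions; the power-style update below follows the W-k-means pattern and is only a sketch (FG-k-means applies such updates at both the group and the individual-feature level, with details not reproduced here):

```python
import numpy as np

def dispersion_weights(D, beta=2.0):
    """Closed-form W-k-means-style update: features (or feature groups)
    with smaller within-cluster dispersion D receive larger weights."""
    inv = (1.0 / np.maximum(D, 1e-12)) ** (1.0 / (beta - 1.0))
    return inv / inv.sum()
```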

7.
Intrusion detection is a necessary step for identifying unusual access or attacks in order to secure internal networks. In general, intrusion detection can be approached with machine learning techniques. In the literature, advanced techniques based on hybrid learning or ensemble methods have been considered, and related work has shown that they are superior to models using a single machine learning technique. This paper proposes a hybrid learning model based on triangle-area-based nearest neighbors (TANN) to detect attacks more effectively. In TANN, k-means clustering is first used to obtain cluster centers corresponding to the attack classes. Then, for each data point, the triangle areas formed by the point and every pair of cluster centers are calculated and assembled into a new feature signature. Finally, a k-NN classifier is used to classify similar attacks based on this triangle-area representation. Using KDD-Cup ’99 as the simulation dataset, the experimental results show that TANN can effectively detect intrusion attacks and provides higher accuracy and detection rates, and a lower false alarm rate, than three baseline models based on support vector machines, k-NN, and a hybrid centroid-based classification model combining k-means and k-NN.
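A sketch of the triangle-area transform, assuming Euclidean geometry; the Gram-determinant form works in any dimension, and the surrounding k-means and k-NN stages would use standard library calls:

```python
import numpy as np
from itertools import combinations

def triangle_area(a, b, c):
    """Area of the triangle (a, b, c) via the Gram determinant:
    area^2 = det([[u.u, u.v], [v.u, v.v]]) / 4 with u = b-a, v = c-a."""
    u, v = b - a, c - a
    gram = np.array([[u @ u, u @ v], [v @ u, v @ v]])
    return 0.5 * np.sqrt(max(np.linalg.det(gram), 0.0))

def tann_features(X, centers):
    """Each sample becomes the vector of areas of the triangles it forms
    with every pair of attack-class cluster centers."""
    pairs = list(combinations(range(len(centers)), 2))
    return np.array([[triangle_area(x, centers[i], centers[j])
                      for i, j in pairs] for x in X])
```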

8.
Detection of mild laryngeal disorders using acoustic parameters of the human voice is the main objective of this study. Observations of sustained phonation (audio recordings of a vocalized /a/) are labeled by clinical diagnosis and rated by severity (from 0 to 3). The research is exclusively constrained to healthy (severity 0) and mildly pathological (severity 1) cases, the two most difficult classes to distinguish. Comprehensive voice signal characterization and information fusion constitute the approach adopted here. Characterization is obtained through a diverse feature set containing 26 feature subsets of varying size extracted from the voice signal. The usefulness of feature-level and decision-level fusion is explored using the support vector machine (SVM) and random forest (RF) as base classifiers. For both types of fusion we also investigate the influence of feature selection on model accuracy. To improve decision-level fusion we introduce a simple unsupervised technique for ensemble design, based on partitioning the feature set by k-means clustering, where the parameter k controls the size and diversity of the prospective ensemble. All types of fusion resulted in an evident improvement over the best individual feature subset; however, no type, including fusion setups comprising feature selection, proved significantly superior to the rest. The proposed ensemble design by feature set decomposition discernibly enhanced decision-level fusion and significantly outperformed feature-level fusion. An ensemble of RF classifiers, induced from a cluster-based partitioning of the feature set, achieved an equal error rate of 13.1 ± 1.8% in the detection of a mildly pathological larynx. This is a very encouraging result, considering that detection of mild laryngeal disorder is a more challenging task than the common discrimination between healthy cases and a wide spectrum of pathological ones.
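A sketch of that cluster-based ensemble design, assuming scikit-learn; standardizing the feature columns before clustering and averaging probabilities for fusion are assumptions layered on top of the abstract:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier

def feature_partition_ensemble(X, y, k, seed=0):
    """Partition the feature set by running k-means on standardized feature
    columns (features as points), then train one random forest per subset.
    k controls ensemble size and diversity, as in the abstract."""
    cols = (X - X.mean(0)) / (X.std(0) + 1e-12)
    groups = KMeans(n_clusters=k, n_init=10, random_state=seed).fit_predict(cols.T)
    models = []
    for g in range(k):
        feats = np.flatnonzero(groups == g)
        rf = RandomForestClassifier(random_state=seed).fit(X[:, feats], y)
        models.append((feats, rf))
    return models

def ensemble_predict_proba(models, X):
    """Decision-level fusion by averaging member probabilities."""
    return np.mean([rf.predict_proba(X[:, feats]) for feats, rf in models], axis=0)
```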

9.
尚敬文, 王朝坤, 辛欣, 应翔. 《软件学报》 (Journal of Software), 2017, 28(3): 648-662
Community structure is an important feature of complex networks, and community detection has significant application value for the study of network structure. Classical clustering algorithms such as k-means are a basic class of methods for the community detection problem. However, when handling the high-dimensional matrices of networks, the communities obtained by these classical clustering methods are often not accurate enough. This paper proposes CoDDA, a community detection algorithm based on a deep sparse autoencoder, which aims to improve the accuracy of community detection when such classical methods are applied to high-dimensional adjacency matrices. First, a hop-based preprocessing method is proposed to optimize the sparse adjacency matrix; the resulting similarity matrix reflects not only the similarity between connected nodes in the network topology but also the similarity between unconnected nodes. Next, based on unsupervised deep learning, a deep sparse autoencoder is constructed to extract features from the similarity matrix, yielding a low-dimensional feature matrix. Compared with the adjacency matrix, the feature matrix represents the network topology more expressively. Finally, the k-means algorithm clusters the low-dimensional feature matrix to obtain the community structure. Experimental results show that, compared with six typical community detection algorithms, CoDDA discovers more accurate community structures. Parameter experiments further show that the community structure found by CoDDA is more accurate than that found by basic k-means applied directly to the high-dimensional adjacency matrix.
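The hop-based preprocessing could look something like the sketch below, where similarity decays with hop distance; the decay schedule and hop cutoff are assumptions, since CoDDA's exact formula is not given in the abstract, and the downstream autoencoder and k-means stages are omitted:

```python
import numpy as np

def hop_similarity(adj, max_hops=3, decay=0.5):
    """Give node pairs reachable in h hops a similarity of decay**(h-1),
    so unconnected but nearby nodes also obtain nonzero similarity."""
    reach = (adj > 0).astype(float)
    sim = reach.copy()
    power = reach.copy()
    for h in range(2, max_hops + 1):
        power = (power @ reach > 0).astype(float)   # h-hop reachability
        sim = np.maximum(sim, decay ** (h - 1) * power)
    np.fill_diagonal(sim, 1.0)
    return sim
```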

10.
In this study, a hierarchical electroencephalogram (EEG) classification system for epileptic seizure detection is proposed. The system comprises three stages: (i) representation of the original EEG signals by wavelet packet coefficients and feature extraction using the best-basis-based wavelet packet entropy method; (ii) in the training stage, a cross-validation (CV) method together with a k-nearest neighbor (k-NN) classifier used to construct a hierarchical knowledge base (HKB); and (iii) in the testing stage, computation of classification accuracy and rejection rate using the top-ranked discriminative rules from the HKB. The dataset is taken from a publicly available EEG database intended for differentiating healthy subjects from subjects suffering from epilepsy. Experimental results show the efficiency of the proposed system: the best classification accuracy is about 100% via 2-, 5-, and 10-fold cross-validation, which indicates that the proposed method has potential for designing a new intelligent EEG-based assistive diagnosis system for early detection of electroencephalographic changes.

11.

In the fields of pattern recognition and machine learning, data preprocessing algorithms have been used increasingly in recent years to achieve high classification performance. In particular, data preprocessing prior to classification has become indispensable for medical datasets with nonlinear and imbalanced data distributions. In this study, a new data preprocessing method is proposed for the classification of five such medical datasets taken from the UCI machine learning repository: Parkinson, hepatitis, Pima Indians, single proton emission computed tomography (SPECT) heart, and thoracic surgery. The proposed method consists of three steps. In the first step, the cluster centers of each attribute are calculated using k-means, fuzzy c-means, and mean shift clustering algorithms. In the second step, the absolute differences between the data in each attribute and the cluster centers are calculated, and the average of these differences is computed for each attribute. In the final step, the weighting coefficients are calculated by dividing the mean value of the differences by the cluster centers, and weighting is performed by multiplying the attribute values in the dataset by the obtained coefficients. Three attribute weighting methods are thus proposed: (1) similarity-based attribute weighting with k-means clustering, (2) similarity-based attribute weighting with fuzzy c-means clustering, and (3) similarity-based attribute weighting with mean shift clustering. The aim is to gather the data within each class together and reduce the within-class variance; by doing so, the data in each class become more compact while the discrimination between classes increases. For comparison with other methods in the literature, random subsampling is used to handle imbalanced dataset classification. After the attribute weighting process, four classification algorithms, namely linear discriminant analysis, the k-nearest neighbor classifier, the support vector machine, and the random forest classifier, are used to classify the imbalanced medical datasets. Model performance is evaluated using classification accuracy, precision, recall, area under the ROC curve, κ value, and F-measure. For training and testing the classifier models, three schemes are used: a 50-50% train-test holdout, a 60-40% train-test holdout, and tenfold cross-validation. The experimental results show that the proposed attribute weighting methods achieve higher classification performance than the random subsampling method in handling imbalanced medical dataset classification.
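The three-step weighting, sketched with the k-means variant only; the direction of the final ratio is our reading of the abstract's wording rather than a confirmed detail, and the per-attribute clustering setup is an assumption:

```python
import numpy as np
from sklearn.cluster import KMeans

def similarity_based_attribute_weights(X, n_clusters=2, seed=0):
    """Per attribute: (1) k-means cluster centers, (2) mean absolute
    difference of the data to the nearest center, (3) coefficient =
    mean difference / mean center magnitude (assumed ratio direction)."""
    w = np.empty(X.shape[1])
    for j in range(X.shape[1]):
        col = X[:, [j]]
        centers = KMeans(n_clusters=n_clusters, n_init=10,
                         random_state=seed).fit(col).cluster_centers_.ravel()
        diffs = np.abs(col - centers).min(axis=1)  # distance to nearest center
        w[j] = diffs.mean() / (np.abs(centers).mean() + 1e-12)
    return w

# Hypothetical usage: X_weighted = X * similarity_based_attribute_weights(X)
```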


12.
By using a kernel function, data that are not easily separable in the original space can be clustered into homogeneous groups in the implicitly transformed high-dimensional feature space. Kernel k-means algorithms have recently been shown to perform better than conventional k-means algorithms in unsupervised classification. However, few reports have examined the benefits of using a kernel function or the relative merits of the various kernel clustering algorithms with regard to the data distribution. In this study, we reformulated four representative clustering algorithms based on a kernel function and evaluated their performance on various data sets. The results indicate that each kernel clustering algorithm gives markedly better performance than its conventional counterpart for almost all data sets. Of the kernel clustering algorithms studied here, the kernel average-linkage algorithm gives the most accurate clustering results.
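For reference, a minimal kernel k-means written purely against a precomputed Gram matrix K; this generic formulation is standard and not specific to the paper, and the random initialization is an assumption:

```python
import numpy as np

def kernel_kmeans(K, k, iters=100, seed=0):
    """Lloyd-style kernel k-means on a Gram matrix K. Squared feature-space
    distance of point i to the mean of cluster c:
    K[i, i] - 2 * mean_j(K[i, j]) + mean_{j, l}(K[j, l]) over members j, l of c."""
    rng = np.random.default_rng(seed)
    labels = rng.integers(k, size=len(K))
    for _ in range(iters):
        dist = np.full((len(K), k), np.inf)
        for c in range(k):
            idx = np.flatnonzero(labels == c)
            if idx.size:
                dist[:, c] = (np.diag(K) - 2 * K[:, idx].mean(1)
                              + K[np.ix_(idx, idx)].mean())
        new = dist.argmin(1)
        if np.array_equal(new, labels):
            break
        labels = new
    return labels
```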

13.
This study compares the performance of different methods for the differentiation and localization of commonly encountered features in indoor environments. Differentiation of such features is of interest for intelligent systems in a variety of applications, such as system control based on acoustic signal detection and identification, map building, navigation, obstacle avoidance, and target tracking. Different representations of amplitude and time-of-flight measurement patterns experimentally acquired from a real sonar system are processed. The approaches compared include the target differentiation algorithm, Dempster-Shafer evidential reasoning, several voting schemes, statistical pattern recognition techniques (k-nearest neighbor classifier, kernel estimator, parameterized density estimator, linear discriminant analysis, and the fuzzy c-means clustering algorithm), and artificial neural networks. The neural networks are trained with different input signal representations obtained using preprocessing techniques such as the discrete ordinary and fractional Fourier, Hartley and wavelet transforms, and Kohonen's self-organizing feature map. Neural networks trained with the back-propagation algorithm, usually with fractional Fourier transform or wavelet preprocessing, achieve near-perfect differentiation, around 85% correct range estimation, and around 95% correct azimuth estimation, which would be satisfactory in a wide range of applications.

14.
Acoustic analysis is a very promising approach to diagnosing voice pathology; it uses continuous wavelet analysis to extract voice feature parameters. This paper proposes an SVM-based algorithm for classifying pathological voices; by choosing the radial basis function (RBF) kernel, a classification accuracy of 97% is achieved.
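A minimal sketch of such a classifier with scikit-learn; the scaling step and the hyperparameter values are assumptions, and the wavelet feature extraction is assumed to have happened upstream:

```python
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# RBF-kernel SVM over wavelet-derived voice features (hypothetical names).
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
# clf.fit(features_train, labels_train)
# accuracy = clf.score(features_test, labels_test)
```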

15.
We present an evaluation and comparison of the performance of four different texture and shape feature extraction methods for the classification of benign and malignant microcalcifications in mammograms. For 103 regions containing microcalcification clusters, texture and shape features were extracted using four approaches: conventional shape quantifiers; the co-occurrence-based method of Haralick; wavelet transformations; and multi-wavelet transformations. For each set of features, the most discriminating features and their optimal weights were found using real-valued and binary genetic algorithms (GAs) with a k-nearest-neighbor classifier and a malignancy criterion for generating ROC curves to measure performance. The best feature sets yielded areas under the ROC curve ranging from 0.84 to 0.89 with the real-valued GA and from 0.83 to 0.88 with the binary GA. The multi-wavelet method outperformed the other three methods, and the conventional shape features were superior to the wavelet and Haralick features.

16.
A k-means clustering algorithm for designing binary tree classifiers is introduced for the classification of cervical cells. At each nonterminal node of the designed binary tree classifier, two sets of effective features are selected: one based on the Bhattacharyya distance, a measure of separability between two classes, and the other based on classification accuracy. The classification results demonstrate the effectiveness of the selected features and of the binary tree classifier.

17.
In the field of multimedia retrieval from video, text frame classification is essential for text detection, event detection, event boundary detection, and related tasks. We propose a new text frame classification method that combines wavelet and median-moment features with k-means clustering to select probable text blocks among 16 equally sized blocks of a video frame. The same feature combination is used with a new Max-Min clustering at the pixel level to choose probable dominant text pixels within the selected probable text blocks. For these pixels, a mutual-nearest-neighbor-based symmetry is explored with a four-quadrant formation centered at the centroid of the probable dominant text pixels to decide whether a block is a true text block. If a frame produces at least one true text block, it is considered a text frame; otherwise it is a non-text frame. Experimental results on different text and non-text datasets, including two public datasets and our own data, show that the proposed method gives promising recall and precision at both the block and frame levels. We also show how existing text detection methods tend to misclassify non-text frames as text frames, in terms of recall and precision at both levels.

18.
Effective and efficient texture feature extraction and classification is an important problem in image understanding and recognition. Recently, texton-learning-based texture classification approaches have been widely studied, where the textons are usually learned via k-means clustering or sparse coding. However, k-means clustering is too coarse to characterize the complex feature space of textures, while sparse texton learning/encoding is time-consuming due to the l0-norm or l1-norm minimization involved. Moreover, these methods mostly compute the texton histogram as the statistical feature for classification, which may not be effective enough. This paper presents an effective and efficient texton learning and encoding scheme for texture classification. First, a regularized-least-squares-based texton learning method is developed to learn the dictionary of textons class by class. Second, a fast two-step l2-norm texton encoding method is proposed to code the input texture feature over the concatenated dictionary of all classes. Third, two types of histogram features are defined and computed from the texton encoding outputs: coding coefficients and coding residuals. Finally, the two histogram features are combined for classification via a nearest-subspace classifier. Experimental results on the CUReT, KTH_TIPS and UIUC datasets demonstrate that the proposed method is very promising, especially when the number of available training samples is limited.
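The appeal of l2-norm encoding is its closed form; a sketch under the assumption of a standard ridge-regression coder (the paper's two-step refinement and its residual features are not reproduced):

```python
import numpy as np

def ridge_encode(D, X, lam=0.01):
    """Code each feature row of X over texton dictionary D (columns are
    textons) by solving argmin_a ||x - D a||^2 + lam * ||a||^2 in closed
    form; one precomputed solve replaces iterative sparse coding."""
    P = np.linalg.solve(D.T @ D + lam * np.eye(D.shape[1]), D.T)
    return X @ P.T  # coding coefficients, one row per input feature
```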

19.
Objective: The huge number of bands in hyperspectral images leads to the "curse of dimensionality" during interpretation and classification. To address this problem, building on the K-means clustering algorithm, we propose a hyperspectral image classification algorithm based on entropy-weighted K-means clustering with global information, which accounts for the importance of each band to different clusters while also considering between-cluster information. Method: First, band weights are introduced to characterize the importance of each band to different clusters, and an entropy information measure is defined to express these weights. Second, to avoid locally optimal clustering, a between-cluster distance measure is introduced to achieve globally optimal clustering. Finally, both measures are incorporated into the K-means objective function, and the optimal classification is obtained by minimizing this objective. Results: To verify the effectiveness of the proposed method, the land-cover classes in the reference maps of the Salinas and Pavia University hyperspectral images were merged according to the degree of difference in their spectral reflectance, and the merged maps were used as new reference classification maps. Both the proposed algorithm and conventional K-means were applied to the Salinas and Pavia University images, and the results were evaluated qualitatively and quantitatively. For the merged land-cover classes, whose spectral reflectance differs considerably, the proposed algorithm produces visually better classification results than conventional K-means; in terms of accuracy, the overall accuracies of the proposed algorithm are 92.20% and 82.96%, versus 83.39% and 67.06% for K-means, improvements of 8.81% and 15.9%, respectively. Conclusion: An entropy-weighted K-means global-information clustering algorithm for hyperspectral image classification is proposed; the experiments show that it achieves good classification results for land-cover targets with varying degrees of spectral reflectance difference.

20.
This paper describes feature extraction methods that use higher-order statistics (HOS) of wavelet packet decomposition (WPD) coefficients for automatic heartbeat recognition. The method consists of three stages. First, the wavelet packet coefficients (WPC) are calculated for each type of ECG beat. Then, higher-order statistics of the WPC are derived. Finally, the obtained feature set is fed to a classifier based on the k-NN algorithm. The ECG records used in this study are taken from the MIT-BIH arrhythmia database, with all heartbeats grouped into five main classes. The classification performance of the proposed system is measured by an average sensitivity of 90%, average selectivity of 92% and average specificity of 98%. The results show that HOS of WPC are highly discriminative features for classifying different arrhythmic ECG beats.
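A sketch of the first two stages under the assumptions of PyWavelets and skewness/kurtosis as the higher-order statistics; the exact statistics, wavelet family, and decomposition depth in the paper may differ:

```python
import numpy as np
import pywt
from scipy.stats import kurtosis, skew

def hos_wpc_features(beat, wavelet="db4", level=3):
    """Skewness and kurtosis of the wavelet packet coefficients of one
    ECG beat, concatenated across all terminal nodes."""
    wp = pywt.WaveletPacket(data=beat, wavelet=wavelet, maxlevel=level)
    feats = []
    for node in wp.get_level(level, order="natural"):
        feats += [skew(node.data), kurtosis(node.data)]
    return np.asarray(feats)
```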

