Similar Literature
20 similar documents found (search time: 46 ms)
1.
Acoustical parameters extracted from recorded voice samples are actively pursued for accurate detection of vocal fold pathology. Most systems for the detection of vocal fold pathology use high-quality voice samples. This paper proposes a hybrid expert system approach to detect vocal fold pathology from compressed/low-quality voice samples, comprising feature extraction using the wavelet packet transform, clustering-based feature weighting, and classification. In order to improve the robustness and discrimination ability of the wavelet packet transform based features (raw features), we propose clustering-based feature weighting methods including k-means clustering (KMC), fuzzy c-means (FCM) clustering, and subtractive clustering (SBC). We investigated the effectiveness of the raw and weighted features (obtained after applying the feature weighting methods) using four different classifiers: least squares support vector machine (LS-SVM) with a radial basis kernel, k-nearest neighbor (kNN) classifier, probabilistic neural network (PNN), and classification and regression tree (CART). The proposed hybrid expert system achieves a promising classification accuracy of 100% using the feature weighting methods, and has potential application in the remote detection of vocal fold pathology.
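As a concrete illustration of clustering-based feature weighting, the sketch below implements a simple KMC-style scheme in which each feature is scaled by the ratio of its global mean to the mean of its cluster-centre values. The two-cluster toy data and the ratio-of-means formula are illustrative assumptions; the paper's exact weighting rule may differ.

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    # Plain k-means (Lloyd's algorithm), kept dependency-free.
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return centers

def kmc_feature_weights(X, k=2):
    # Hypothetical KMC weighting: global mean / mean of cluster centres,
    # per feature (a common scheme; not necessarily the paper's formula).
    centers = kmeans(X, k)
    return X.mean(axis=0) / (centers.mean(axis=0) + 1e-12)

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(2.0, 0.3, (20, 3)),   # "normal" voices (toy data)
               rng.normal(5.0, 0.3, (20, 3))])  # "pathological" voices (toy data)
w = kmc_feature_weights(X, k=2)
Xw = X * w   # weighted features passed on to the classifier
```

The weighted matrix `Xw` would then be fed to LS-SVM, kNN, PNN or CART in place of the raw features.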

2.
Prostate-specific antigen (PSA) is the most widely used serum biomarker for the early detection of prostate cancer (PCA). Nevertheless, the PSA level can be falsely elevated due to prostatic enlargement, inflammation or infection, which limits the specificity of the PSA test. The objective of this study is to use a machine learning approach for the analysis of mass spectrometry data to discover more reliable biomarkers that distinguish PCA from benign specimens. Serum samples from 179 prostate cancer patients and 74 benign patients were analyzed. These samples were processed using ProXPRESSION™ Biomarker Enrichment Kits (PerkinElmer). Mass spectra were acquired using a prOTOF™ 2000 matrix-assisted laser desorption/ionization orthogonal time-of-flight (MALDI-O-TOF) mass spectrometer. In this study, we searched for potential biomarkers using our feature selection method, the Extended Markov Blanket (EMB). With the new marker selection algorithm, a panel of 26 peaks achieved an accuracy of 80.7%, a sensitivity of 83.5%, a specificity of 74.4%, a positive predictive value (PPV) of 87.9%, and a negative predictive value (NPV) of 68.2%. In contrast, when PSA alone was used (with a cutoff of 4.0 ng/ml), a sensitivity of 66.7%, a specificity of 53.6%, a PPV of 73.5%, and an NPV of 45.4% were obtained.
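For reference, the sensitivity, specificity, PPV and NPV figures quoted above follow directly from confusion-matrix counts. A minimal helper, with purely illustrative counts rather than the study's data:

```python
def diagnostic_metrics(tp, fp, tn, fn):
    # Standard confusion-matrix summary statistics.
    return {
        "sensitivity": tp / (tp + fn),            # true-positive rate
        "specificity": tn / (tn + fp),            # true-negative rate
        "ppv": tp / (tp + fp),                    # positive predictive value
        "npv": tn / (tn + fn),                    # negative predictive value
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
    }

# Illustrative counts only (not from the paper):
m = diagnostic_metrics(tp=80, fp=40, tn=60, fn=20)
```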

3.
Whenever there is a fault in an automotive engine ignition system or a change in engine condition, an automotive mechanic can conventionally analyze the ignition pattern of the engine to examine symptoms, based on specific domain knowledge (domain features of an ignition pattern). In this paper, a case-based reasoning (CBR) approach is presented to help solve this diagnosis problem using not only the domain features but also features extracted from signals captured with a computer-linked automotive scope meter. A CBR expert system has the advantage of providing the user with multiple possible diagnoses, instead of the single most probable diagnosis provided by traditional network-based classifiers such as multi-layer perceptrons (MLP) and support vector machines (SVM). In addition, CBR overcomes the problem of incremental and decremental knowledge update as required by both MLP and SVM. Although CBR is effective, its application to high-dimensional domains is inefficient because every instance in the case library must be compared during reasoning. To overcome this inefficiency, a combination of preprocessing methods is proposed: wavelet packet transforms (WPT), kernel principal component analysis (KPCA) and kernel k-means (KKM). Because the ignition signals captured by a scope meter are very similar, WPT is used for feature extraction so that the ignition signals can be compared via the extracted features. However, the extracted features contain many redundant points, which may degrade the diagnosis performance; therefore, KPCA is employed to perform dimension reduction. In addition, the number of cases in the case library can be controlled through clustering, for which KKM is adopted. Several diagnosis methods are compared in this paper, including MLP, SVM and CBR. Experimental results showed that CBR using WPT and KKM achieved the highest accuracy and better fitted the requirements of the expert system.
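The retrieval step at the heart of CBR, returning several nearest cases rather than one winning class, can be sketched as follows. The feature vectors and fault labels are invented for illustration; in the paper they would come from WPT features reduced by KPCA, with the library condensed by KKM.

```python
import numpy as np

def retrieve_cases(case_features, case_labels, query, k=3):
    # Rank stored cases by Euclidean distance to the query's feature
    # vector and return the k nearest, giving the user several
    # candidate diagnoses instead of a single answer.
    d = np.linalg.norm(np.asarray(case_features) - np.asarray(query, float), axis=1)
    idx = np.argsort(d)[:k]
    return [(case_labels[i], float(d[i])) for i in idx]

# Toy case library with hypothetical fault labels:
library = np.array([[0.0, 0.0], [1.0, 1.0], [5.0, 5.0]])
labels = ["normal", "fouled_plug", "open_circuit"]
matches = retrieve_cases(library, labels, query=[0.2, 0.1], k=2)
```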

4.
Only 30% of patients with elevated serum prostate specific antigen (PSA) levels who undergo prostate biopsy are diagnosed with prostate cancer (PCa). Novel methods are needed to reduce the number of unnecessary biopsies. We report on the identification and validation of a panel of 12 novel biomarkers for prostate cancer (PCaP), using CE coupled to MS. The biomarkers could be defined by comparing first-void urine of 51 men with PCa and 35 with negative prostate biopsy. In contrast, midstream urine samples did not allow the identification of discriminatory molecules, suggesting that prostatic fluids may be the source of the defined biomarkers. Consequently, first-void urine samples were tested for sufficient amounts of prostatic fluid, using a prostatic fluid indicative panel (“informative” polypeptide panel; IPP). A combination of IPP and PCaP to predict positive prostate biopsy was evaluated in a blinded prospective study. Two hundred thirteen of 264 samples matched the IPP criterion. PCa was detected with 89% sensitivity and 51% specificity. Including age and percent free PSA in the proteomic signatures resulted in 91% sensitivity and 69% specificity.

5.
Quantum dot (QD) functionalized graphene sheets (GS) were prepared and used as labels for the preparation of sandwich-type electrochemical immunosensors for the detection of a cancer biomarker, prostate specific antigen (PSA). The primary anti-PSA antibody was also immobilized onto the GS. The immunosensor displayed a wide linear response range (0.005–10 ng/mL), a low detection limit (3 pg/mL), and good reproducibility, selectivity and stability. The immunosensor was used to detect PSA in patient serum samples with satisfactory results. Thus, this unique immunosensor may find many applications in clinical diagnosis.

6.
Racial differences in prostate cancer incidence and mortality have been reported. Several authors hypothesize that African Americans have a more rapid growth rate of prostate cancer compared to Caucasians, which manifests in higher recurrence and lower survival rates in the former group. In this paper we propose a Bayesian piecewise mixture model to characterize PSA progression over time in African Americans and Caucasians, using follow-up serial PSA measurements after surgery. Each individual’s PSA trajectory is hypothesized to have a latent phase immediately following surgery, followed by a rapid increase in PSA indicating regrowth of the tumor. The true time of transition from the latent phase to the rapid growth phase is unknown and can vary across individuals, suggesting a random change point across individuals. Furthermore, some patients may not experience the latent phase because the cancer had already spread outside the prostate before surgery. We propose a two-component mixture model to accommodate patients both with and without a latent phase. Within the framework of this mixture model, patients who do not have a latent phase are allowed to have different rates of PSA rise; patients who have a latent phase are allowed to have different PSA trajectories, represented by subject-specific change points and rates of PSA rise before and after the change point. The proposed Bayesian methodology is implemented using Markov chain Monte Carlo techniques. Model selection is performed using the deviance information criterion based on the observed and complete likelihoods. Finally, we illustrate the methods using a prostate cancer dataset.
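The assumed trajectory shape can be written down directly: before the subject-specific change point, (log-)PSA follows a near-flat latent slope; afterwards it rises rapidly. The sketch below is only the deterministic mean curve under illustrative parameters; the paper places priors on all of these quantities and mixes over a "no latent phase" component.

```python
import numpy as np

def log_psa(t, t_change, latent_slope, growth_slope, intercept):
    # Piecewise-linear mean trajectory for log-PSA: a near-flat latent
    # phase before the change point, then rapid regrowth. The branches
    # are constructed to join continuously at t_change.
    t = np.asarray(t, dtype=float)
    before = intercept + latent_slope * t
    after = intercept + latent_slope * t_change + growth_slope * (t - t_change)
    return np.where(t < t_change, before, after)

t = np.linspace(0.0, 8.0, 81)   # years since surgery (illustrative)
y = log_psa(t, t_change=3.0, latent_slope=0.02, growth_slope=0.8, intercept=-1.0)
```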

7.
This paper presents a novel Gabor-based kernel Principal Component Analysis (PCA) method by integrating the Gabor wavelet representation of face images and the kernel PCA method for face recognition. Gabor wavelets first derive desirable facial features characterized by spatial frequency, spatial locality, and orientation selectivity to cope with the variations due to illumination and facial expression changes. The kernel PCA method is then extended to include fractional power polynomial models for enhanced face recognition performance. A fractional power polynomial, however, does not necessarily define a kernel function, as it might not define a positive semidefinite Gram matrix. Note that the sigmoid kernels, one of the three classes of widely used kernel functions (polynomial kernels, Gaussian kernels, and sigmoid kernels), do not actually define a positive semidefinite Gram matrix either. Nevertheless, the sigmoid kernels have been successfully used in practice, such as in building support vector machines. In order to derive real kernel PCA features, we apply only those kernel PCA eigenvectors that are associated with positive eigenvalues. The feasibility of the Gabor-based kernel PCA method with fractional power polynomial models has been successfully tested on both frontal and pose-angled face recognition, using two data sets from the FERET database and the CMU PIE database, respectively. The FERET data set contains 600 frontal face images of 200 subjects, while the PIE data set consists of 680 images across five poses (left and right profiles, left and right half profiles, and frontal view) with two different facial expressions (neutral and smiling) of 68 subjects. 
The effectiveness of the Gabor-based kernel PCA method with fractional power polynomial models is shown in terms of both absolute performance indices and comparative performance against the PCA method, the kernel PCA method with polynomial kernels, the kernel PCA method with fractional power polynomial models, the Gabor wavelet-based PCA method, and the Gabor wavelet-based kernel PCA method with polynomial kernels.
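A minimal sketch of kernel PCA with a fractional power polynomial model is given below. Since such a "kernel" need not yield a positive semidefinite Gram matrix, only eigenvectors associated with positive eigenvalues are retained, as the paper does. Applying the fractional power elementwise via sign(G)·|G|^p is an assumption of this sketch, made to keep negative Gram entries well defined.

```python
import numpy as np

def fpp_kernel_pca(X, power=0.8, n_components=2):
    # "Kernel" matrix: elementwise fractional power of the Gram matrix.
    G = X @ X.T
    K = np.sign(G) * np.abs(G) ** power
    n = len(X)
    J = np.eye(n) - np.ones((n, n)) / n    # centring matrix
    Kc = J @ K @ J
    vals, vecs = np.linalg.eigh(Kc)
    order = np.argsort(vals)[::-1]
    vals, vecs = vals[order], vecs[:, order]
    keep = vals > 1e-10                     # positive eigenvalues only
    vals = vals[keep][:n_components]
    vecs = vecs[:, keep][:, :n_components]
    return vecs * np.sqrt(vals)             # projected training samples

rng = np.random.default_rng(0)
X = rng.normal(size=(10, 3))   # stand-in for Gabor feature vectors
Z = fpp_kernel_pca(X)
```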

8.
Breast cancer is the most common cancer among women. In CAD systems, several studies have investigated the use of the wavelet transform as a multiresolution analysis tool for texture analysis, whose outputs can serve as inputs to a classifier. For classification, the polynomial classifier has been used because it provides a single model for the optimal separation of classes. In this paper, a system is proposed for texture analysis and classification of lesions in mammographic images. Multiresolution analysis features were extracted from the region of interest of a given image. These features were computed based on three different wavelet functions: Daubechies 8, Symlet 8 and biorthogonal 3.7. For classification, we used the polynomial classification algorithm to label the mammogram images as normal or abnormal, and compared it with other artificial intelligence algorithms (decision tree, SVM, k-NN). A receiver operating characteristic (ROC) curve is used to evaluate the performance of the proposed system. The system was evaluated using 360 digitized mammograms from the DDSM database, and the algorithm achieved an area under the ROC curve (Az) of 0.98 ± 0.03. The polynomial classifier proved to perform better than the other classification algorithms.
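The multiresolution texture features can be illustrated with a single-filter stand-in: mean energies of the detail subbands of a Haar wavelet decomposition. The Haar filter is an assumption made to keep the sketch dependency-free; the paper uses Daubechies 8, Symlet 8 and biorthogonal 3.7 instead.

```python
import numpy as np

def haar_subband_energies(img, levels=2):
    # Mean energy of the LH/HL/HH detail subbands at each level of a
    # separable Haar decomposition: simple texture descriptors.
    a = np.asarray(img, float)
    feats = []
    for _ in range(levels):
        lo = (a[0::2] + a[1::2]) / 2          # row-wise Haar lowpass
        hi = (a[0::2] - a[1::2]) / 2          # row-wise Haar highpass
        ll = (lo[:, 0::2] + lo[:, 1::2]) / 2
        lh = (lo[:, 0::2] - lo[:, 1::2]) / 2
        hl = (hi[:, 0::2] + hi[:, 1::2]) / 2
        hh = (hi[:, 0::2] - hi[:, 1::2]) / 2
        feats += [float((s ** 2).mean()) for s in (lh, hl, hh)]
        a = ll                                # recurse on the approximation
    return feats

flat = haar_subband_energies(np.ones((8, 8)))                 # textureless ROI
stripes = haar_subband_energies(np.tile([1.0, 0.0], (8, 4)))  # vertical stripes
```

A flat region yields zero detail energy, while oriented texture concentrates energy in the matching subband; vectors like these are what the classifier consumes.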

9.
In this paper, we develop a diagnosis model based on particle swarm optimization (PSO), support vector machines (SVMs) and association rules (ARs) to diagnose erythemato-squamous diseases. The proposed model consists of two stages: first, AR is used to select the optimal feature subset from the original feature set; then a PSO based approach for parameter determination of SVM is developed to find the best parameters of kernel function (based on the fact that kernel parameter setting in the SVM training procedure significantly influences the classification accuracy, and PSO is a promising tool for global searching). Experimental results show that the proposed AR_PSO–SVM model achieves 98.91% classification accuracy using 24 features of the erythemato-squamous diseases dataset taken from UCI (University of California at Irvine) machine learning database. Therefore, we can conclude that our proposed method is very promising compared to the previously reported results.
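The parameter determination stage can be sketched with a bare-bones PSO. In the paper the objective would be cross-validated SVM error as a function of the kernel parameters; here a convex quadratic stands in for it, and the inertia and acceleration constants are conventional textbook choices, not the authors'.

```python
import numpy as np

def pso(objective, bounds, n_particles=20, iters=60, seed=0):
    # Bare-bones particle swarm optimizer (minimization).
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, float).T
    x = rng.uniform(lo, hi, (n_particles, len(lo)))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_val = np.array([objective(p) for p in x])
    g = pbest[pbest_val.argmin()].copy()          # global best
    for _ in range(iters):
        r1 = rng.random(x.shape)
        r2 = rng.random(x.shape)
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([objective(p) for p in x])
        improved = vals < pbest_val
        pbest[improved] = x[improved]
        pbest_val[improved] = vals[improved]
        g = pbest[pbest_val.argmin()].copy()
    return g

# Stand-in for cross-validated SVM error over (C, gamma); optimum at (1.0, 0.1).
best = pso(lambda p: (p[0] - 1.0) ** 2 + (p[1] - 0.1) ** 2,
           bounds=[(0.01, 10.0), (0.001, 1.0)])
```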

10.
Obstructive sleep apnea (OSA) is a highly prevalent sleep disorder. The traditional diagnosis methods of the disorder are cumbersome and expensive. The ability to automatically identify OSA from electrocardiogram (ECG) recordings is important for clinical diagnosis and treatment. In this study, we propose an expert system based on the discrete wavelet transform (DWT), the fast Fourier transform (FFT) and a least squares support vector machine (LS-SVM) for the automatic recognition of patients with OSA from nocturnal ECG recordings. Thirty ECG recordings collected from normal subjects and subjects with sleep apnea, each approximately 8 h in duration, were used throughout the study. The proposed OSA recognition system comprises three stages. In the first stage, an algorithm based on DWT was used to analyze ECG recordings for the detection of heart rate variability (HRV) and ECG-derived respiration (EDR) changes. In the second stage, an FFT based power spectral density (PSD) method was used for feature extraction from HRV and EDR changes. Then, a hill-climbing feature selection algorithm was used to identify the best features that improve classification performance. In the third stage, the obtained features were used as input patterns of the LS-SVM classifier. Using the cross-validation method, the accuracy of the developed system was found to be 100% when using a subset of selected HRV and EDR features. The results confirmed that the proposed expert system has potential for the recognition of patients with suspected OSA from ECG recordings.
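The second-stage spectral features can be illustrated with a simple FFT periodogram band-power computation. The 4 Hz sampling rate, the synthetic sinusoidal "HRV" series, and the LF/HF band edges below are illustrative assumptions, not the paper's exact settings.

```python
import numpy as np

def band_power(x, fs, f_lo, f_hi):
    # FFT periodogram power in [f_lo, f_hi): the kind of PSD feature
    # computed from HRV / EDR series before feature selection.
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(x)) ** 2 / (fs * len(x))
    band = (freqs >= f_lo) & (freqs < f_hi)
    return float(psd[band].sum())

fs = 4.0                                  # 4 Hz resampled HRV series (toy)
t = np.arange(0, 300, 1 / fs)
hrv = np.sin(2 * np.pi * 0.1 * t)         # dominant low-frequency rhythm
lf = band_power(hrv, fs, 0.04, 0.15)      # classical HRV LF band
hf = band_power(hrv, fs, 0.15, 0.40)      # classical HRV HF band
```

The resulting band powers (here LF dominates by construction) would be collected into the feature vector passed to the LS-SVM.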

11.
To improve the accuracy of Web data table recognition, a recognition method based on a support vector machine with a mixed kernel function is proposed. Structural features, content features, and row (column) similarity features of tables are defined, and a mixed kernel combining a polynomial kernel and a linear kernel is used for the automatic recognition of Web data tables. Experimental results on seven websites show an average precision of 95.14% and an average recall of 95.69%.
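The mixed kernel itself is a one-liner: a convex combination of a polynomial kernel and a linear kernel. The mixing weight, degree and offset below are illustrative; the paper tunes its own values.

```python
import numpy as np

def mixed_kernel(X1, X2, lam=0.6, degree=2, coef0=1.0):
    # Convex combination of a polynomial kernel and a linear kernel.
    # Since both components are valid kernels and 0 <= lam <= 1,
    # the combination is a valid (PSD) kernel as well.
    lin = X1 @ X2.T
    return lam * (lin + coef0) ** degree + (1 - lam) * lin

rng = np.random.default_rng(0)
X = rng.normal(size=(12, 4))   # stand-in for table feature vectors
K = mixed_kernel(X, X)
```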

12.

Speaker verification (SV) systems mainly involve two individual stages: feature extraction and classification. In this paper, we explore these two modules with the aim of improving the performance of a speaker verification system under noisy conditions. On the one hand, the choice of the most appropriate acoustic features is a crucial factor for performing robust speaker verification. The acoustic parameters used in the proposed system are: Mel frequency cepstral coefficients, their first and second derivatives (deltas and delta–deltas), Bark frequency cepstral coefficients, perceptual linear predictive coefficients, and relative spectral transform perceptual linear predictive coefficients. A complete comparison of different combinations of these features is discussed. On the other hand, the major weakness of a conventional support vector machine (SVM) classifier is the use of generic traditional kernel functions to compute the distances among data points, yet the kernel function of an SVM has great influence on its performance. In this work, we propose combining two SVM-based classifiers with different kernel functions (a linear kernel and a Gaussian radial basis function kernel) with a logistic regression classifier. The combination is carried out by means of a parallel structure, in which different voting rules for taking the final decision are considered. Results show that significant improvement in the performance of the SV system is achieved by using the combined features with the combined classifiers, either with clean speech or in the presence of noise. Finally, to further enhance the system in noisy environments, the inclusion of a multiband noise removal technique as a preprocessing stage is proposed.

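The parallel combination stage reduces to a voting rule over the base classifiers' accept/reject decisions. Below is a sketch of three common rules; the paper evaluates its own set of rules, which may differ.

```python
def combine(decisions, rule="majority"):
    # Each base classifier (e.g. linear-kernel SVM, RBF-kernel SVM,
    # logistic regression) votes accept (1) or reject (0).
    votes = sum(decisions)
    if rule == "and":                 # accept only if all accept
        return int(votes == len(decisions))
    if rule == "or":                  # accept if any accepts
        return int(votes > 0)
    return int(votes > len(decisions) / 2)   # simple majority

decision = combine([1, 0, 1])   # hypothetical votes from the three classifiers
```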

13.
In this study, a Discriminator Model for Glaucoma Diagnosis (DMGD) using soft computing techniques is presented. As biomedical images such as fundus images are often acquired at high resolution, the region of interest (ROI) for glaucoma diagnosis must first be selected to reduce the complexity of the system. The DMGD system uses a series of pre-processing steps: initial cropping by the green channel's intensity, spatially weighted fuzzy c-means (SWFCM) clustering, blood vessel detection and removal by Gaussian derivative filters (GDF), and inpainting algorithms. Once the ROI has been selected, numerical features, such as colour, spatial-domain features from the local binary pattern (LBP) and frequency-domain features from LAWS, are generated from the corresponding ROI for classification using a kernel-based support vector machine (SVM). The DMGD system's performance was validated using four fundus image databases (ORIGA, RIM-ONE, DRISHTI-GS1, and HRF) with SVM classifiers based on four different kernels: linear kernel (LK), polynomial kernel (PK), radial basis function kernel (RBFK), and quadratic kernel (QK). Results show that the DMGD system classifies the fundus images accurately using the multiple features and kernel-based classifiers on the properly segmented ROI.

14.
A Legendre kernel function for support vector machine classification
Based on the Legendre orthogonal polynomials, a new class of kernel functions, the Legendre kernel, is proposed. Experiments on the two-spiral dataset and standard UCI datasets show that this kernel outperforms commonly used kernels (polynomial, Gaussian RBF, etc.) in terms of robustness and generalization performance. Moreover, its parameter takes values only in the natural numbers, which greatly shortens parameter optimization time.
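One common form of such a kernel is a product of per-dimension Legendre expansions: for inputs scaled to [-1, 1], K(x, z) = Π_d Σ_{k=0..n} P_k(x_d) P_k(z_d), with the single natural-number parameter n. This is a plausible reading of the abstract, not necessarily the paper's exact definition.

```python
import numpy as np
from numpy.polynomial.legendre import Legendre

def legendre_kernel(x, z, n=3):
    # K(x, z) = prod_d sum_{k=0..n} P_k(x_d) P_k(z_d), inputs in [-1, 1].
    # n (a natural number) is the kernel's only parameter.
    k = 1.0
    for xd, zd in zip(np.atleast_1d(x), np.atleast_1d(z)):
        px = np.array([Legendre.basis(i)(xd) for i in range(n + 1)])
        pz = np.array([Legendre.basis(i)(zd) for i in range(n + 1)])
        k *= float(px @ pz)
    return k

x, z = np.array([0.3, -0.4]), np.array([0.1, 0.8])
kxz = legendre_kernel(x, z)
```

By construction the kernel is symmetric, and K(x, x) is a product of sums of squares, hence positive.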

15.
This paper considers the problem of determining the solution set of polynomial systems, a well-known problem in control system analysis and design. A novel approach is developed as a viable alternative to the commonly employed algebraic geometry and homotopy methods. The first result of the paper shows that the solution set of the polynomial system belongs to the kernel of a suitable symmetric matrix. Such a matrix is obtained via the solution of a linear matrix inequality (LMI) involving the maximization of the minimum eigenvalue of an affine family of symmetric matrices. The second result concerns the computation of the solution set from the kernel of the obtained matrix. For polynomial systems of degree m in n variables, a basic procedure is available if the kernel dimension does not exceed m+1, while an extended procedure can be applied if the kernel dimension is less than n(m−1)+2. Finally, some application examples are illustrated to show the features of the approach and to make a brief comparison with polynomial resultant techniques. Copyright © 2003 John Wiley & Sons, Ltd.

16.
The task of breast density quantification is becoming increasingly relevant due to its association with breast cancer risk. In this work, a semi-automated and a fully automated tool to assess breast density from full-field digitized mammograms are presented. The first tool is based on a supervised interactive thresholding procedure for segmenting dense from fatty tissue and is used with a twofold goal: to assess mammographic density (MD) more objectively and accurately than via visual-based methods, and to label the mammograms that are later used to train the fully automated tool. Although most automated methods rely on supervised approaches based on a global labeling of the mammogram, the proposed method relies on pixel-level labeling, allowing better tissue classification and density measurement on a continuous scale. The fully automated method combines a classification scheme based on local features with thresholding operations that improve the performance of the classifier. A dataset of 655 mammograms was used to test the concordance of both approaches in measuring MD. Three expert radiologists measured MD in each of the mammograms using the semi-automated tool (DM-Scan); MD was then measured by the fully automated system, and the correlation between the two methods was computed. The relation between MD and breast cancer was then analyzed using a case–control dataset consisting of 230 mammograms. The intraclass correlation coefficient (ICC) was used to compute reliability among raters and between techniques. The average ICC among raters when using the semi-automated tool was 0.922, whilst the average correlation between the semi-automated and automated measures was ICC = 0.838. In the case–control study, odds ratios (OR) of 1.38 and 1.50 per 10% increase in MD were obtained with the semi-automated and fully automated approaches, respectively.
It can therefore be concluded that the automated and semi-automated MD assessments correlate well. Both methods also found an association between MD and breast cancer risk, which supports the use of the proposed tools for breast cancer risk prediction and clinical decision making. A full version of DM-Scan is freely available.

17.
Linear subspace analysis methods have been successfully applied to extract features for face recognition. But they are inadequate to represent the complex and nonlinear variations of real face images, such as illumination, facial expression and pose variations, because of their linear properties. In this paper, a nonlinear subspace analysis method, Kernel-based Nonlinear Discriminant Analysis (KNDA), is presented for face recognition, which combines the nonlinear kernel trick with the linear subspace analysis method, Fisher Linear Discriminant Analysis (FLDA). First, the kernel trick is used to project the input data into an implicit feature space; then FLDA is performed in this feature space. Thus nonlinear discriminant features of the input data are yielded. In addition, in order to reduce the computational complexity, a geometry-based feature vector selection scheme is adopted. Another similar nonlinear subspace analysis method is Kernel-based Principal Component Analysis (KPCA), which combines the kernel trick with linear Principal Component Analysis (PCA). Experiments are performed with the polynomial kernel, and KNDA is compared with KPCA and FLDA. Extensive experimental results show that KNDA can give a higher recognition rate than KPCA and FLDA.

18.
Kernel-based methods are effective for object detection and recognition. However, the computational cost when using kernel functions is high, except when using linear kernels. To realize fast and robust recognition, we apply normalized linear kernels to local regions of a recognition target, and the kernel outputs are integrated by summation. This kernel is referred to as a local normalized linear summation kernel. Here, we show that kernel-based methods that employ local normalized linear summation kernels can be computed by a linear kernel of local normalized features. Thus, the computational cost of the kernel is nearly the same as that of a linear kernel and much lower than that of radial basis function (RBF) and polynomial kernels. The effectiveness of the proposed method is evaluated in face detection and recognition problems, and we confirm that our kernel provides higher accuracy with lower computational cost than RBF and polynomial kernels. In addition, our kernel is also robust to partial occlusion and shadows on faces since it is based on the summation of local kernels.
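The paper's key observation, that a sum of normalized linear kernels over local regions equals a plain linear kernel of locally L2-normalized features, can be verified numerically. The region layout and data below are arbitrary.

```python
import numpy as np

def lnls_kernel(x, y, regions):
    # Local normalized linear summation kernel: sum over regions of
    # the cosine (normalized linear) kernel restricted to each region.
    return sum(float(x[r] @ y[r] /
                     (np.linalg.norm(x[r]) * np.linalg.norm(y[r])))
               for r in regions)

def local_normalized_features(x, regions):
    # Equivalent precomputation: L2-normalize each local region once,
    # then the kernel is just an ordinary dot product.
    return np.concatenate([x[r] / np.linalg.norm(x[r]) for r in regions])

rng = np.random.default_rng(0)
x, y = rng.normal(size=8), rng.normal(size=8)
regions = [slice(0, 4), slice(4, 8)]
k1 = lnls_kernel(x, y, regions)
k2 = float(local_normalized_features(x, regions)
           @ local_normalized_features(y, regions))
```

This equivalence is why the kernel's cost is essentially that of a linear kernel: the normalization is done once per sample, not once per kernel evaluation.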

19.
A method is presented for detecting blurred edges in images and for estimating the following edge parameters: position, orientation, amplitude, mean value, and edge slope. The method is based on a local image decomposition technique called a polynomial transform. The information that is made explicit by the polynomial transform is well suited to detect image features, such as edges, and to estimate feature parameters. By using the relationship between the polynomial coefficients of a blurred feature and those of the a priori assumed (unblurred) feature in the scene, the parameters of the blurred feature can be estimated. The performance of the proposed edge parameter estimation method in the presence of image noise has been analyzed. An algorithm is presented for estimating the spread of a position-invariant Gaussian blurring kernel, using estimates at different edge locations over the image. First, a single-scale algorithm is developed in which one polynomial transform is used. A critical parameter of the single-scale algorithm is the window size, which has to be chosen a priori. Since the reliability of the estimate for the spread of the blurring kernel depends on the ratio of this spread to the window size, it is difficult to choose a window of appropriate size a priori. The problem is overcome by a multiscale blur estimation algorithm, where several polynomial transforms at different scales are applied and the appropriate scale for analysis is chosen a posteriori. By applying the blur estimation algorithm to natural and synthetic images with different amounts of blur and noise, it is shown that the algorithm gives reliable estimates for the spread of the blurring kernel even at low signal-to-noise ratios.

20.
Xiao Yueyue, Huang Wei, Oh Sung-Kwun, Zhu Liehuang. Applied Intelligence, 2022, 52(6): 6398–6412.

In this paper, we propose a polynomial kernel neural network classifier (PKNNC) based on random sampling and information gain. Random sampling is used here to generate datasets for the construction of the polynomial neurons located in the neural networks, while information gain is used to evaluate the importance of the input variables (viz. dataset features) of each neuron. Both random sampling and information gain stem from the concepts of well-known random forest models. Some traditional neural networks have certain limitations, such as slow convergence speed, easily falling into local optima, and difficulty describing the polynomial relation between input and output. In this regard, a general PKNNC is proposed, consisting of three parts: the premise, conclusion, and aggregation. The method of designing the PKNNC is summarized as follows. In the premise part, random sampling and information gain are used to obtain multiple subdatasets that are passed to the aggregation part, and the conclusion part uses three types of polynomials. In the aggregation part, the least squares method (LSM) is used to estimate the parameters of the polynomials. Furthermore, the particle swarm optimization (PSO) algorithm is exploited to optimize the PKNNC; the overall optimization combines structure optimization and parameter optimization. The PKNNC takes advantage of three types of polynomial kernel functions, random sampling techniques and information gain algorithms, which give it a good ability to describe higher-order nonlinear relationships between input and output variables, high generalization, and fast convergence. To evaluate the effectiveness of the PKNNC, numerical experiments are carried out on two types of data: machine learning data and face data. A comparative study illustrates that the proposed PKNNC leads to better performance than several conventional models.
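The conclusion part of a neuron, a polynomial fitted by the least squares method, can be sketched as follows; the degree-2 polynomial and toy data are illustrative only.

```python
import numpy as np

def fit_polynomial_neuron(x, y, degree=2):
    # Conclusion part: polynomial coefficients estimated by the least
    # squares method (LSM); coefficients are returned highest degree first.
    A = np.vander(x, degree + 1)
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

x = np.linspace(-1.0, 1.0, 30)
y = 2.0 * x**2 - 1.0 * x + 0.5    # toy target with known coefficients
coef = fit_polynomial_neuron(x, y)
```

In the full PKNNC, each neuron would be fitted on its own information-gain-selected subdataset, and PSO would then tune the network's structure and parameters.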



Copyright © Beijing Qinyun Technology Development Co., Ltd. 京ICP备09084417号