Similar Documents
20 similar documents found (search time: 31 ms)
1.
Objective Deep belief networks (DBNs) can automatically learn and extract features from data, giving them a marked advantage in feature learning. Polarimetric SAR (PolSAR) image classification suffers from low utilization of its massive feature set and from subjectivity in feature selection. To address these problems, a DBN-based PolSAR image classification method is proposed. Method First, a large set of classification features is extracted, comprising four categories: polarimetric, radiometric, spatial, and sub-aperture features. Samples are then drawn from this feature set and assembled into feature vectors to be fed into a deep belief network. Finally, the DBN learns and abstracts the massive feature set layer by layer, yielding effective features for classification. Results Experiments on AIRSAR data achieve a classification accuracy of 91.06%. Comparison with classical supervised Wishart classification and logistic regression demonstrates the DBN's advantage in feature learning and verifies the applicability of the method. Conclusion For the selection and use of massive PolSAR features, the proposed method offers a new approach to PolSAR image classification and a useful exploration toward broader application of deep belief networks.

2.
Piecewise linear optimization is one of the most frequently used optimization models in practice, in areas such as transportation, finance and supply-chain management. In this paper, we investigate a particular piecewise linear optimization problem: optimizing the norm of piecewise linear functions (NPLF). Specifically, we are interested in solving a class of Brugnano–Casulli piecewise linear systems (PLS), which can be reformulated as an NPLF problem. Generally speaking, NPLF is an optimization problem with a nonsmooth, nonconvex objective function. A new and efficient optimization approach based on DC (Difference of Convex functions) programming and DCA (DC Algorithms) is developed. With a suitable DC formulation, we design a DCA scheme, named ℓ1-DCA, for the problem of optimizing the ℓ1-norm of NPLF. Thanks to particular properties of the problem, we prove that under some conditions, our proposed algorithm converges to an exact solution after a finite number of iterations. In addition, when a nonglobal solution is found, a numerical procedure is introduced to find a feasible point with a smaller objective value and to restart ℓ1-DCA at this point. Several numerical experiments illustrate these interesting convergence properties. Moreover, to show the efficiency of the proposed method, we present an application to the free-surface hydrodynamic problem, where correct numerical modeling often requires the solution of special PLS.
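The DCA iteration described above — linearize the concave part, then solve a convex subproblem — can be sketched on a toy one-dimensional DC program. This is a hedged illustration, not the paper's ℓ1-DCA: the objective f(x) = x² − |x| is made up for the example, chosen because both the subgradient and the convex subproblem have closed forms.

```python
# Minimal DCA sketch: minimize f(x) = g(x) - h(x) with
# g(x) = x**2 (convex) and h(x) = |x| (convex).
# Each iteration picks a subgradient y of h at the current point
# and solves the convex subproblem argmin_x g(x) - y*x = y/2 exactly.
def dca(x0, iters=50, tol=1e-10):
    x = x0
    for _ in range(iters):
        y = 1.0 if x > 0 else (-1.0 if x < 0 else 0.0)  # subgradient of |x| at x
        x_next = y / 2.0                                 # closed-form convex subproblem
        if abs(x_next - x) < tol:
            break
        x = x_next
    return x

print(dca(0.3))  # converges to 0.5, a local minimizer of x**2 - |x|
```

Starting from any nonzero point, the iterate reaches a critical point (±1/2) after one step, mirroring the finite-convergence behavior the abstract highlights.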

3.
To address the high time cost and expense of traditional dysarthria diagnosis, an automatic computer-based method for recognizing dysarthric speech is proposed. Gammatone frequency cepstral coefficients (GFCC) are combined with commonly used acoustic features to form a composite acoustic feature set; a differential evolution algorithm performs feature selection, and a logistic regression classifier recognizes dysarthric speech. The Torgo dysarthric speech database is divided into three subsets — non-words, short words, and restricted sentences — from which 24-dimensional GFCC and 37 commonly used acoustic features are extracted to form the combined feature set, followed by differential-evolution feature selection and logistic regression classification. Experiments show that differential evolution effectively selects features with stronger discriminative power and thus significantly improves dysarthria recognition rates: on the non-word subset, accuracy reaches 98.18%, recall 98.3%, and precision 98.3%.
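The feature-selection step can be sketched as a standard DE/rand/1/bin loop over real vectors thresholded into binary feature masks. This is a hedged stand-in, not the paper's implementation: the fitness function below is hypothetical (it rewards selecting two "informative" features of a made-up 5-feature set), whereas the real setup would score a logistic regression classifier on the Torgo features.

```python
import random

# Hypothetical fitness: reward informative features {0, 1}, lightly
# penalize selecting junk features (a stand-in for classifier accuracy).
def fitness(mask):
    informative = {0, 1}
    chosen = {i for i, m in enumerate(mask) if m}
    return len(chosen & informative) - 0.1 * len(chosen - informative)

def de_select(dim=5, pop_size=10, gens=40, F=0.5, CR=0.9, seed=0):
    rng = random.Random(seed)
    pop = [[rng.random() for _ in range(dim)] for _ in range(pop_size)]
    mask = lambda v: [x > 0.5 for x in v]        # threshold reals into a feature mask
    for _ in range(gens):
        for i in range(pop_size):
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            # DE/rand/1 mutation with binomial crossover
            trial = [pop[i][d] if rng.random() > CR else
                     pop[a][d] + F * (pop[b][d] - pop[c][d]) for d in range(dim)]
            if fitness(mask(trial)) >= fitness(mask(pop[i])):  # greedy selection
                pop[i] = trial
    return mask(max(pop, key=lambda v: fitness(mask(v))))

print(de_select())  # the returned mask typically keeps features 0 and 1
```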

4.
To address the limited accuracy of existing heart sound localization and segmentation methods, a model-based segmentation method for signals with low heart rate variability is proposed. First, ensemble empirical mode decomposition (EEMD) represents the heart sound signal with its informative intrinsic mode function (IMF) components, improving its analyzability. A Gaussian mixture model (GMM) is then built from the Gaussian constraint relation between fundamental and non-fundamental heart sounds. Next, the hidden Markov model (HMM) is refined into a duration-dependent hidden Markov model (DHMM), which describes the segmentation model more compactly and reduces algorithmic complexity. Finally, time-domain features distinguish S1, systole, S2, and diastole. Compared with the classical Hilbert algorithm and the logistic regression hidden semi-Markov model (LRHSMM), the proposed algorithm achieves better detection accuracy and shorter running time.
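One ingredient of the pipeline above is scoring an observation under a Gaussian mixture. The sketch below is a hedged illustration of that step only — the weights, means, and variances are made up, not taken from the paper's fitted model for fundamental versus non-fundamental heart sounds.

```python
import math

# Density of a single univariate Gaussian component.
def gaussian(x, mu, var):
    return math.exp(-0.5 * (x - mu) ** 2 / var) / math.sqrt(2 * math.pi * var)

# GMM density: weighted sum of component densities.
def gmm_pdf(x, weights, means, variances):
    return sum(w * gaussian(x, m, v) for w, m, v in zip(weights, means, variances))

# Hypothetical 2-component mixture: one component per sound class.
print(gmm_pdf(0.0, [0.7, 0.3], [0.0, 2.0], [1.0, 1.0]))
```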

5.
Lan, Qiujun; Jiang, Shan. Applied Intelligence, 2021, 51(10): 6859–6880

Missing data is a common problem in credit evaluation practice and can obstruct the development and application of an evaluation model. Block-wise missing data is a particularly troublesome issue. Based on a multi-task feature selection approach, this paper proposes a method called MMPFS to build a model for credit evaluation that primarily includes two steps: (1) dividing the dataset into several nonoverlapping subsets based on missing patterns, and (2) integrating the multi-task feature selection approach using logistic regression to perform joint feature learning on all subsets. The proposed method has the following advantages: (1) missing data do not need to be managed in advance, (2) available data can be fully used for model learning, (3) information loss or bias caused by general missing data processing methods can be avoided, and (4) overfitting risk caused by redundant features can be reduced. The implementation framework and algorithm principle of the proposed method are described, and three credit datasets from UCI are investigated to compare the proposed method with other commonly used missing data treatments. The results show that MMPFS can produce a better credit evaluation model than data preprocessing methods such as sample deletion and data imputation.
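Step (1) — partitioning records into nonoverlapping subsets by missingness pattern — can be sketched in a few lines. This is a minimal illustration, not the MMPFS implementation; `None` stands in for a missing value, and the pattern key is simply which fields are observed.

```python
from collections import defaultdict

# Group rows by their missingness pattern so every subset is complete
# on its own observed features (the nonoverlapping split of step 1).
def split_by_missing_pattern(rows):
    groups = defaultdict(list)
    for row in rows:
        pattern = tuple(v is not None for v in row)  # True = observed
        groups[pattern].append(row)
    return dict(groups)

rows = [
    [1.0, 2.0, None],
    [0.5, None, 3.0],
    [2.0, 1.0, None],
]
print(len(split_by_missing_pattern(rows)))  # 2 distinct missing patterns
```

Joint feature learning would then be run across these subsets as related tasks, one task per pattern.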


6.
Rekha, D.; Sangeetha, J.; Ramaswamy, V. The Journal of Supercomputing, 2022, 78(2): 2580–2596

The selection of text features is a fundamental task and plays an important role in digital document analysis. Conventional text feature extraction methods rely on hand-crafted features; obtaining an efficient feature is a laborious process, and learning a fresh, real-time representation of features in text data remains a challenging task. Deep learning is making inroads into digital document mining, and a significant distinction from traditional methods is that it learns the features of a digital document automatically. In this paper, logistic regression and deep dependency parsing (LR-DDP) methods are proposed. The logistic regression token generation model generates robust tokens by means of Napierian grammar. With these robust tokens, a deep transition-based dependency parser using duplex long short-term memory is designed. Experimental results demonstrate that our dependency parser achieves performance comparable to existing methods in terms of digital document parsing accuracy, parsing time and overhead, and is thus computationally efficient and accurate.


7.

In this paper, we propose a new unsupervised feature selection method based on kernel Fisher discriminant analysis and regression learning. Existing feature selection methods are based on either manifold learning or discriminative techniques, each of which has shortcomings. Although some studies show the advantage of two-step methods that benefit from both manifold learning and discriminative techniques, a joint formulation has been shown to be more efficient. To this end, we construct a global discriminant objective term of a clustering framework based on the kernel method, and add a regression learning term to the objective function, which forces the optimization to select a low-dimensional representation of the original dataset. We use the L2,1-norm of the features to impose a sparse structure on the features, which yields more discriminative features. We propose an algorithm to solve the resulting optimization problem and further discuss its convergence, parameter sensitivity, computational complexity, and clustering and classification accuracy. To demonstrate the effectiveness of the proposed algorithm, we perform a set of experiments on different available datasets and compare the results against state-of-the-art algorithms. These results show that our method outperforms the existing state-of-the-art methods in many cases on different datasets, but the improved performance comes at the cost of increased time complexity.


8.
The explosive development of computational tools these days is threatening the security of cryptographic algorithms, which are regarded as the primary traditional methods for ensuring information security. The physical layer security approach is introduced as a method both for improving the confidentiality of secret key distribution in cryptography and for enabling data transmission without relying on higher-layer encryption. In this paper, the cooperative jamming paradigm — one of the techniques used at the physical layer — is studied, and the resulting power allocation problem, which maximizes the sum of secrecy rates subject to power constraints, is formulated as a nonconvex optimization problem. The objective function is a so-called DC (Difference of Convex functions) function, and some constraints are coupling. We propose a new DC formulation and develop an efficient DCA (DC Algorithm) to deal with this nonconvex program. The DCA introduces the elegant concept of approximating the original nonconvex program by a sequence of convex ones: each iteration of DCA requires the solution of a convex subproblem. The main advantage of the proposed approach is that it leads to strongly convex quadratic subproblems with separable variables in the objective function, which can be tackled by both distributed and centralized methods. One of the major contributions of the paper is a highly efficient distributed algorithm for solving the convex subproblem. We adopt the dual decomposition method, which reduces to iteratively computing the projection of points onto a very simple structured set, determined by an inexpensive procedure. The numerical results show the efficiency and superiority of the new DCA-based algorithm compared with existing approaches.

9.
Because LTE network data are massive and heterogeneous, manual drive-test analysis can no longer meet the demand for detecting poor-quality cells from drive-test data. To improve the efficiency and accuracy of poor-quality cell detection, machine learning has increasingly been applied to this task. For drive-test data covering a small number of cells, this paper proposes a detection method based on four distance-based features. Drive-test data are labeled by combining a clustering algorithm with manual judgment; the extraction of the four distance-based features is compared against that of the traditional two-dimensional features, and classification is performed with four classifiers: logistic regression, decision tree, support vector machine, and k-nearest neighbors. Experimental results show that the four distance-based features are more effective than the traditional two-dimensional features for detecting poor-quality cells, and that with the four-dimensional features the support vector machine classifier performs best.

10.
This paper proposes a novel feature selection method that includes a self-representation loss function, a graph regularization term, and an \({l_{2,1}}\)-norm regularization term. Unlike the traditional least squares loss function, which focuses on minimizing the regression error between the class labels and their predictions, the proposed self-representation loss function represents each feature as a linear combination of its relevant features, aiming to effectively select representative features and to ensure robustness to outliers. The graph regularization terms encode two kinds of inherent information: the relationship between samples (the sample–sample relation for short) and the relationship between features (the feature–feature relation for short). The feature–feature relation reflects the similarity between two features, and the sample–sample relation reflects the similarity between two samples; both relations are preserved in the coefficient matrix. The \({l_{2,1}}\)-norm regularization term conducts the feature selection, choosing features that satisfy the characteristics mentioned above. Furthermore, we put forward a new optimization method to solve our objective function. Finally, we feed the reduced data into a support vector machine (SVM) to conduct classification on real datasets. The experimental results show that the proposed method performs better than state-of-the-art methods such as k-nearest neighbor, ridge regression, and SVM.
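The \({l_{2,1}}\)-norm regularizer above is the sum of the ℓ2 norms of the rows of the coefficient matrix, which drives entire rows (i.e., features) to zero. The sketch below computes just that quantity; it is a plain illustration of the norm, not the paper's optimizer.

```python
import math

# l_{2,1} norm of a matrix W (list of rows): sum over rows of the
# row's Euclidean norm. A zero row means the corresponding feature
# is dropped, which is what makes this a feature-selection penalty.
def l21_norm(W):
    return sum(math.sqrt(sum(x * x for x in row)) for row in W)

print(l21_norm([[3.0, 4.0], [0.0, 0.0]]))  # 5.0: one active row, one pruned row
```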

11.

Classical linear discriminant analysis (LDA) has been applied successfully to machine learning and pattern recognition, and many variants based on LDA have been proposed. However, traditional LDA has several disadvantages. First, although selected features offer good interpretability, LDA performs poorly at feature selection. Second, the original data contain many redundant features and noisy samples, yet LDA has poor robustness to noisy data and outliers. Last, LDA utilizes only the global discriminant information, without considering the local discriminant structure. To overcome these problems, we present a robust sparse manifold discriminant analysis (RSMDA) method. In RSMDA, introducing the L2,1 norm allows the most discriminant features to be selected for discriminant analysis, while the local manifold structure captures the local discriminant information of the original data. Owing to the L2,1 constraint and the local discriminant information, the proposed method is highly robust to noisy data and has the potential to outperform other methods. Extensive experiments on different datasets demonstrate the effectiveness of RSMDA.


12.
Next-generation broadband wireless networks deploy OFDM/OFDMA as the enabling technology for broadband data transmission with QoS capabilities. Many optimization problems arise in the design of such a network. This article studies an optimization problem in resource allocation. Using mathematical modeling techniques, we formulate the considered problem as a pure integer linear program, which is then reformulated as a DC (Difference of Convex functions) program via an exact penalty technique. We propose a continuous approach for its resolution, based on DC programming and DCA (DC Algorithm): it works in a continuous domain but provides integer solutions. To check the globality of computed solutions, a global method combining DCA with a well-adapted Branch-and-Bound (B&B) algorithm is investigated. Preliminary numerical results show the efficiency of the proposed method with respect to the standard Branch-and-Bound algorithm.
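The exact penalty idea above can be sketched as follows. This is an illustrative toy, not the paper's model: a binary constraint x ∈ {0,1} is relaxed to x ∈ [0,1], and the concave penalty p(x) = Σ xᵢ(1 − xᵢ) is added to the objective; p vanishes exactly at integer points, so for a sufficiently large penalty weight the continuous problem recovers integer solutions.

```python
# Concave exact-penalty term for binary variables relaxed to [0, 1]:
# zero exactly at integer points, strictly positive in between.
def penalty(x):
    return sum(xi * (1.0 - xi) for xi in x)

print(penalty([0.0, 1.0, 1.0]))  # 0.0 at an integer point
print(penalty([0.5, 1.0, 0.0]))  # 0.25 away from integrality
```

Since the penalty is concave, objective + t·penalty is itself a DC function, which is what makes the DCA machinery applicable.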

13.
A new sparse kernel probability density function (pdf) estimator based on a zero-norm constraint is constructed using the classical Parzen window (PW) estimate as the target function. The so-called zero-norm of the parameters is used to achieve enhanced model sparsity, and it is suggested to minimize an approximation of the zero-norm. It is shown that under certain conditions, the kernel weights of the proposed pdf estimator based on the zero-norm approximation can be updated using the multiplicative nonnegative quadratic programming algorithm. Numerical examples demonstrate the efficacy of the proposed approach.
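The target function named above — the classical Parzen window estimate — can be written down directly. This is the generic textbook estimator with a Gaussian kernel, shown for context; it is not the paper's sparse zero-norm estimator, which fits sparse kernel weights to this target.

```python
import math

# Parzen window (kernel density) estimate with a Gaussian kernel:
# the average of Gaussian bumps of bandwidth h centered at the samples.
def parzen_pdf(x, samples, h):
    norm = 1.0 / (len(samples) * h * math.sqrt(2 * math.pi))
    return norm * sum(math.exp(-0.5 * ((x - s) / h) ** 2) for s in samples)

print(parzen_pdf(0.0, [-1.0, 1.0], 1.0))  # density midway between two samples
```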

14.
Two previously proposed heuristic algorithms for solving the penalized regression-based clustering model (PRClust) are (a) an algorithm that combines difference-of-convex programming with coordinate-wise descent (DC-CD) and (b) an algorithm that combines DC with the alternating direction method of multipliers (DC-ADMM). In this paper, a faster method is proposed for solving PRClust. DC-CD uses p × n × (n − 1)/2 slack variables, where n is the number of data points and p is the number of their features; in each iteration, these slack variables and the cluster centres are updated using second-order cone programming (SOCP). DC-ADMM uses p × n × (n − 1) slack variables, updated together with the cluster centres using ADMM in each iteration. In this paper, PRClust is reformulated into an equivalent model solved by alternating optimization. Our proposed algorithm needs only n × (n − 1)/2 slack variables — far fewer than DC-CD and DC-ADMM — and updates them analytically with a simple equation in each iteration, using an SOCP only to update the cluster centres. The proposed SOCP is therefore much smaller than that of DC-CD, which updates both cluster centres and slack variables. Experimental results on real datasets confirm that our proposed method is faster than DC-ADMM and much faster than DC-CD.

15.
The purpose of this paper is to develop new efficient approaches based on DC (Difference of Convex functions) programming and DCA (DC Algorithm) to perform clustering via minimum sum-of-squares Euclidean distance. We consider the two most widely used models for the so-called Minimum Sum-of-Squares Clustering (MSSC for short): a bilevel programming problem and a mixed integer program. First, the mixed integer formulation of MSSC is carefully studied and reformulated as a continuous optimization problem via a new result on the exact penalty technique in DC programming; DCA is then applied to the resulting problem. Second, we introduce a Gaussian kernel version of the bilevel programming formulation of MSSC, named GKMSSC. The GKMSSC problem is formulated as a DC program for which a simple and efficient DCA scheme is developed. A regularization technique is investigated for exploiting the nice effect of the DC decomposition, and a simple procedure for finding good starting points of DCA is developed. The proposed DCA schemes are original and very inexpensive because each iteration amounts to computing the projection of points onto a simplex and/or a ball and/or a box, all of which have explicit forms. Numerical results on real-world datasets show the efficiency and scalability of DCA and its great superiority over k-means and kernel k-means, the standard methods for clustering.
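One of the explicit projections mentioned above is the Euclidean projection onto the probability simplex. The routine below is the standard O(n log n) sort-based algorithm — a textbook sketch for illustration, not code from the paper.

```python
# Euclidean projection of v onto the simplex {x : x_i >= 0, sum x_i = 1}.
# Sort descending, find the largest rho with u_rho > (cumsum_rho - 1)/rho,
# then clip each coordinate by the corresponding threshold tau.
def project_simplex(v):
    u = sorted(v, reverse=True)
    cumsum, tau = 0.0, 0.0
    for i, ui in enumerate(u, start=1):
        cumsum += ui
        t = (cumsum - 1.0) / i
        if ui > t:
            tau = t                      # last index where the condition holds
    return [max(x - tau, 0.0) for x in v]

print(project_simplex([0.5, 1.2, -0.3]))  # nonnegative, sums to 1
```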

16.
For solving a class of ℓ2-ℓ0-regularized problems, we convexify the nonconvex ℓ2-ℓ0 term with the help of its biconjugate function. The resulting convex program is explicitly given; it possesses a very simple structure and can be handled by convex optimization tools and standard software. Furthermore, to exploit simultaneously the advantages of convex and nonconvex approximation approaches, we propose a two-phase algorithm in which the convex relaxation is used in the first phase, and in the second phase an efficient DCA (Difference of Convex functions Algorithm) based algorithm is run from the solution given by Phase 1. Applications to feature selection in support vector machine learning are presented, with experiments on several synthetic and real-world datasets. Comparative numerical results with standard algorithms show the efficiency and potential of the proposed approaches.

17.
Material selection is a very important issue for an electronics company, as it involves many qualitative and quantitative factors. The material selection problem is associated with design and manufacturing problems, which have been widely investigated. This study develops a hybrid fuzzy decision-making model that combines the fuzzy weighted average (FWA) with the fuzzy inference system (FIS) for material substitution selection in the electronics industry. FWA is employed to select a substitute material in an uncertain environment, while FIS is used for reasoning. FWA with α-cut arithmetic (FWAα-cut) is a popular technique in decision-making problems, but it may lead to the following unanticipated situations: (1) unclear decision situations; (2) undecided results expressed by fuzzy membership functions; and (3) high computational complexity. Therefore, a fuzzy weighted average with the weakest t-norm is designed as an alternative method for group decision making. In contrast to traditional FWA methods, it obtains more visible fuzzy results for decision makers with lower computational complexity, and provides more accurate estimates through weakest t-norm operations in uncertain environments. The proposed hybrid fuzzy decision-making model thus imitates an expert's experience and can estimate substitution purchasing under various conditions. A real material substitution selection case is employed to examine the feasibility of the proposed model; experimental results reveal that it outperforms the traditional FWA model in coping with material substitution selection problems.
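The α-cut arithmetic underlying classical FWA can be sketched at a single α level. This is a hedged, brute-force illustration (endpoint enumeration in the style of Dong–Wong style FWA algorithms), not the weakest-t-norm variant proposed in the paper: at a given α, each rating xᵢ and weight wᵢ becomes an interval, and the extremes of Σwᵢxᵢ / Σwᵢ are attained at weight-interval endpoints because the expression is linear-fractional in w.

```python
from itertools import product

# Fuzzy weighted average at one alpha-cut: x_intervals and w_intervals
# are (low, high) pairs. For fixed weights the average is monotone in
# each x_i, so the minimum uses all lower x-ends and the maximum all
# upper x-ends; weights are enumerated over interval endpoints.
def fwa_alpha_cut(x_intervals, w_intervals):
    x_lo = [a for a, _ in x_intervals]
    x_hi = [b for _, b in x_intervals]
    lo, hi = float("inf"), float("-inf")
    for w in product(*w_intervals):
        sw = sum(w)
        lo = min(lo, sum(wi * xi for wi, xi in zip(w, x_lo)) / sw)
        hi = max(hi, sum(wi * xi for wi, xi in zip(w, x_hi)) / sw)
    return lo, hi

# Hypothetical two-criteria example: ratings in [1,2] and [3,4],
# weights both in [0.4, 0.6] at this alpha level.
print(fwa_alpha_cut([(1.0, 2.0), (3.0, 4.0)], [(0.4, 0.6), (0.4, 0.6)]))
```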

18.
We offer an efficient approach based on difference of convex functions (DC) optimization for self-organizing maps (SOM). We consider SOM as an optimization problem with a nonsmooth, nonconvex energy function and investigate DC programming and the DC algorithm (DCA), an innovative approach in the nonconvex optimization framework, to effectively solve this problem. Furthermore, an appropriate training version of this algorithm is proposed. Numerical results on many real-world datasets show the efficiency of the proposed DCA-based algorithms in terms of both solution quality and topographic maps.

19.
The aim of this paper is to propose a new hybrid data mining model based on a combination of various feature selection and ensemble learning classification algorithms, in order to support the decision making process. The model is built in several stages. In the first stage, the initial dataset is preprocessed, with particular attention paid to feature selection: five different feature selection algorithms were applied, and their results, scored by the ROC and accuracy measures of a logistic regression algorithm, were combined using different voting schemes. We also propose a new voting method, called if_any, which outperformed all other voting methods as well as each single feature selection algorithm's results. In the next stage, four different classification algorithms — generalized linear model, support vector machine, naive Bayes, and decision tree — were trained on the dataset obtained from the feature selection process. These classifiers were combined into eight different ensemble models using soft voting. Experimental results on a real dataset show that the hybrid model based on features selected by the if_any voting method together with the ensemble GLM + DT model achieves the highest performance, outperforming all other ensemble and single classifier models.
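The soft-voting combination used above can be sketched in a few lines. This is a minimal generic illustration, not the paper's pipeline: each base classifier outputs class probabilities for a sample, the ensemble averages them, and the predicted class is the argmax of the averaged probabilities.

```python
# Soft voting: average per-class probabilities across classifiers,
# then return the index of the highest average.
def soft_vote(prob_lists):
    n_classes = len(prob_lists[0])
    avg = [sum(p[c] for p in prob_lists) / len(prob_lists) for c in range(n_classes)]
    return max(range(n_classes), key=avg.__getitem__)

# Three hypothetical classifiers' probability outputs for one sample:
# one prefers class 0, two prefer class 1 -> the ensemble picks class 1.
print(soft_vote([[0.6, 0.4], [0.3, 0.7], [0.2, 0.8]]))  # -> 1
```

Hard (majority) voting would count argmax votes instead; soft voting retains each classifier's confidence, which is why it is often preferred when calibrated probabilities are available.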

20.
Objective To obtain accurate salient object segmentation, most deep learning methods introduce attention mechanisms that weight features to suppress noise and redundancy. However, they model attention coarsely and treat all features equally, failing to explicitly learn the global importance of different channels and different spatial regions. We therefore propose DCANet (DCA network), a salient object detection algorithm based on a deep cluster attention (DCA) mechanism, to better model feature-level pixel context. Method DCA explicitly partitions the feature maps along both the channel and spatial dimensions, clustering features into a foreground-sensitive group and a background-sensitive group. Ordinary pixel-wise attention weighting is then applied within each group, and semantic-level attention weighting is further applied between groups. DCA is conceptually simple, has few parameters, and can be conveniently deployed in any saliency detection network. Results Comparisons with 19 methods on six datasets verify that DCA yields fine-grained salient object segmentation masks, and deploying DCA improves every evaluated model on all metrics. On the ECSSD (extended complex scene saliency dataset), DCANet improves on the runner-up by 0.9% in F-measure; on DUT-OMRON (Dalian University of Technology and OMRON Corporation), by 0.5% in F-measure with a 3.2% lower mean absolute error (MAE); on HKU-IS, by 0.3% in F-measure with a 2.8% lower MAE; and on PASCAL (pattern analysis, statistical modeling and computational learning)-S (subset), by 0.8% in F-measure with a 4.2% lower MAE. Conclusion Through fine-grained channel and spatial partitioning, the proposed deep cluster attention effectively strengthens the global saliency scores of the foreground-sensitive group. Compared with existing attention mechanisms, DCA is conceptually clear, demonstrably effective, and simple to deploy, and it offers a new, viable direction for research on general attention mechanisms.
