Neural Processing Letters - Graph convolutional networks (GCNs), which rely on graph structures to aggregate information from neighbors and output robust node embeddings, have become a popular...
Neural Processing Letters - Graph clustering places strict quality requirements on the structure of the data graph, which directly affects the final clustering results....
Dimensionality reduction has attracted extensive attention in machine learning. It usually includes two types: feature selection and subspace learning. Previously, many researchers have demonstrated that dimensionality reduction is meaningful for real applications. Unfortunately, a large number of these works utilize feature selection and subspace learning independently. This paper explores a novel supervised feature selection algorithm that takes subspace learning into account. Specifically, this paper employs an ℓ2,1-norm and an ℓ2,p-norm regularizer, respectively, to conduct sample denoising and feature selection by exploring the correlation structure of the data. Then this paper uses two constraints (i.e., hypergraph and low-rank) to consider the local structure and the global structure among the data, respectively. Finally, this paper uses an alternating optimization framework that iteratively optimizes each variable while fixing the others until the algorithm converges. Extensive experiments show that the new supervised feature selection method achieves good results on eighteen public data sets.
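To make the ℓ2,1-norm part of such a model concrete, the sketch below solves only the plain ℓ2,1-regularized least-squares feature selection problem with the standard iteratively reweighted scheme; it omits the ℓ2,p term and the hypergraph and low-rank constraints described above, and the function and parameter names (l21_feature_selection, lam, n_iter) are illustrative assumptions rather than the paper's own.

```python
import numpy as np

def l21_feature_selection(X, Y, lam=1.0, n_iter=50, eps=1e-8):
    """Minimize ||X W - Y||_F^2 + lam * ||W||_{2,1} over W.

    X: (n_samples, n_features); Y: (n_samples, n_classes) one-hot labels.
    Standard iteratively reweighted least squares: at each step D is diagonal
    with D_ii = 1 / (2 * ||w_i||_2), and W has the closed form
    W = (X^T X + lam * D)^{-1} X^T Y.
    """
    n, d = X.shape
    D = np.eye(d)
    W = np.zeros((d, Y.shape[1]))
    for _ in range(n_iter):
        # Closed-form update of W for the current re-weighting matrix D
        W = np.linalg.solve(X.T @ X + lam * D, X.T @ Y)
        # Rows of W with small norm get a large weight and are pushed toward
        # zero, so row-sparsity of W acts as feature selection
        row_norms = np.sqrt((W ** 2).sum(axis=1)) + eps
        D = np.diag(1.0 / (2.0 * row_norms))
    ranking = np.argsort(-np.sqrt((W ** 2).sum(axis=1)))
    return ranking, W

# Toy usage: rank the features of a random problem and keep the top 5
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.standard_normal((100, 20))
    labels = rng.integers(0, 3, size=100)
    Y = np.eye(3)[labels]              # one-hot encoding of the labels
    ranking, _ = l21_feature_selection(X, Y, lam=0.5)
    print("top-5 features:", ranking[:5])
```

Features are then ranked by the row norms of W: rows driven to zero by the ℓ2,1 penalty correspond to discarded features.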
Neural Processing Letters - Feature selection is a useful and important process that is widely used in high-dimensional data processing and artificial intelligence. Its goal is to select a...
The characteristics of non-linearity, low rank, and feature redundancy often appear in high-dimensional data and pose great difficulties for further analysis. Therefore, a low-rank unsupervised feature selection algorithm based on kernel functions is proposed. Firstly, each feature is projected into a high-dimensional kernel space by a kernel function to address linear inseparability in the low-dimensional space. At the same time, a self-representation form is introduced into the residual term, and low-rank and sparsity constraints are imposed on the coefficient matrix. Finally, a sparse regularization term on the coefficient vector of the kernel matrix is introduced to perform feature selection. In this algorithm, the kernel matrix handles linear inseparability, the low-rank constraint captures the global structure of the data, and the self-representation form determines the importance of each feature. Experiments show that, compared with other algorithms, classification after feature selection with this algorithm achieves good results.
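The abstract does not spell out the exact objective, but the core idea of weighting per-feature kernels with a sparse, nonnegative coefficient vector can be sketched as follows. The surrogate objective, the RBF kernel choice, and all names (kernel_feature_weights, lam, gamma) are assumptions made for illustration, not the authors' formulation, and the low-rank constraint is left out.

```python
import numpy as np

def rbf_kernel_1d(x, gamma=1.0):
    """RBF kernel matrix built from a single feature column x of shape (n,)."""
    diff = x[:, None] - x[None, :]
    return np.exp(-gamma * diff ** 2)

def kernel_feature_weights(X, lam=0.1, gamma=1.0, n_iter=500):
    """Learn nonnegative sparse weights over per-feature kernels.

    Surrogate objective (an assumption, not the paper's exact model):
        min_{theta >= 0} ||K_all - sum_i theta_i * K_i||_F^2 + lam * sum_i theta_i
    where K_i is the RBF kernel of feature i and K_all the kernel of all
    features. Features with larger theta_i are deemed more important.
    """
    n, d = X.shape
    Ks = np.stack([rbf_kernel_1d(X[:, i], gamma) for i in range(d)])  # (d, n, n)
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=2)
    K_all = np.exp(-gamma * sq)

    # Step size from the Lipschitz constant of the smooth quadratic term
    G = np.einsum('ijk,ljk->il', Ks, Ks)          # Gram matrix of the kernels
    lr = 1.0 / (2.0 * np.linalg.norm(G, 2))

    theta = np.full(d, 1.0 / d)
    for _ in range(n_iter):
        residual = K_all - np.tensordot(theta, Ks, axes=1)       # (n, n)
        grad = -2.0 * np.einsum('ijk,jk->i', Ks, residual) + lam  # d-dim gradient
        theta = np.maximum(theta - lr * grad, 0.0)                # projected step
    return theta

# Toy usage: rank the features of a random matrix by their kernel weights
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.standard_normal((60, 8))
    theta = kernel_feature_weights(X)
    print("feature ranking:", np.argsort(-theta))
```

The nonnegativity projection plus the ℓ1 term keeps the weight vector sparse, so features whose kernels contribute little to reconstructing the full-data kernel receive zero weight and are dropped.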
Neural Processing Letters - Recently, massive multimedia data (especially images) has been moved to the cloud environment for analysis and retrieval, which makes the data security issue particularly...