Similar Articles
 20 similar articles found (search time: 15 ms)
1.
To address the task-allocation problem on mobile crowdsensing platforms, a task-allocation model named TREA-ULCM is proposed, combining a task-requirement feature extraction algorithm with a user-label classification method. First, the task-requirement feature extraction algorithm extracts the category keywords of each sensing task. Next, a classifier is trained on the data set using a multi-linear neural network and multiple kernel learning, and this classifier predicts the type labels of users. Finally, based on the task's category keywords, combined with spatial location information and user participation levels, the model selects users who carry the corresponding task-category label and best satisfy the task requirements, and dispatches the task to them. Simulation results show that the TREA-ULCM task-allocation model is feasible in terms of task matching degree and task-allocation efficiency.
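A minimal Python sketch of the final dispatching step described above: filtering users who carry the task's category label and lie within the sensing range, then ranking them by participation. Field names such as `labels`, `location`, and `participation` are hypothetical placeholders, not the paper's data model.

```python
from math import hypot

def dispatch_task(task, users, max_dist, k):
    """Pick up to k users that carry the task's category label,
    are within max_dist of the task location, and have the
    highest participation scores."""
    candidates = [
        u for u in users
        if task["category"] in u["labels"]
        and hypot(u["location"][0] - task["location"][0],
                  u["location"][1] - task["location"][1]) <= max_dist
    ]
    # Rank remaining candidates by historical participation.
    candidates.sort(key=lambda u: u["participation"], reverse=True)
    return candidates[:k]

users = [
    {"id": 1, "labels": {"air_quality"}, "location": (0.0, 1.0), "participation": 0.9},
    {"id": 2, "labels": {"noise"},       "location": (0.2, 0.1), "participation": 0.7},
]
task = {"category": "air_quality", "location": (0.0, 0.0)}
print(dispatch_task(task, users, max_dist=2.0, k=1))  # -> user 1
```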

2.
Named entity disambiguation (NED) is the task of linking mentions of ambiguous entities to the entities they refer to in a knowledge base such as Wikipedia. We propose an approach that effectively disentangles the discriminative features by collaboratively exploiting collective wisdom (via human-labeled crowd labels) and deep learning (via human-generated data) for the NED task. In particular, we devise a crowd model to elicit the underlying features (crowd features) from crowd labels that indicate a matching candidate for each mention, and then use the crowd features to fine-tune a dynamic convolutional neural network (DCNN). The learned DCNN is employed to obtain deep crowd features that enhance traditional hand-crafted features for the NED task. The proposed method benefits substantially from incorporating crowd knowledge (via crowd labels) into a generic deep learning model for the NED task. Experimental analysis demonstrates that the proposed approach is superior to traditional hand-crafted features when enough crowd labels are gathered.

3.
In this paper, we propose a general learning framework based on local and global regularization. In the local regularization part, our algorithm constructs a regularized classifier for each data point using its neighborhood, while the global regularization part adopts a Laplacian regularizer to smooth the data labels predicted by those local classifiers. We show that such a learning framework can easily be incorporated into unsupervised, semi-supervised, and supervised learning paradigms. Moreover, many existing learning algorithms can be derived from our framework. Finally, we present experimental results to show the effectiveness of our method.
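A minimal numpy sketch of the global step only, assuming the local classifiers have already produced a prediction vector `f_local`: a kNN graph Laplacian smooths those predictions by minimizing ||f − f_local||² + λ fᵀLf, which has the closed-form solution (I + λL)f = f_local. This is a generic illustration, not the paper's exact formulation.

```python
import numpy as np
from sklearn.neighbors import kneighbors_graph

def laplacian_smooth(X, f_local, n_neighbors=5, lam=1.0):
    """Smooth per-point local predictions with a graph Laplacian regularizer."""
    W = kneighbors_graph(X, n_neighbors=n_neighbors, mode="connectivity")
    W = 0.5 * (W + W.T)               # symmetrize the kNN graph
    W = W.toarray()
    L = np.diag(W.sum(axis=1)) - W    # unnormalized graph Laplacian
    # Closed-form minimizer of ||f - f_local||^2 + lam * f^T L f
    return np.linalg.solve(np.eye(len(X)) + lam * L, f_local)

X = np.random.rand(100, 3)
f_local = np.random.rand(100)         # stand-in for the local classifiers' outputs
f_smooth = laplacian_smooth(X, f_local)
```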

4.
In this paper, we propose a novel framework for multi-label classification, which directly models the dependencies among labels using a Bayesian network. Each node of the Bayesian network represents a label, and the links and conditional probabilities capture the probabilistic dependencies among multiple labels. We employ our Bayesian network structure learning method, which is guaranteed to find the globally optimal structure, independent of the initial structure. After structure learning, maximum likelihood estimation is used to learn the conditional probabilities among nodes. Any current multi-label classifier can be employed to obtain measurements of the labels. Then, using the learned Bayesian network, the true labels are inferred by combining the relationships among labels with the label estimates obtained from a current multi-labeling method. We further extend the proposed multi-label classification method to deal with incomplete label assignments. A structural Expectation-Maximization algorithm is adopted for both structure and parameter learning. Experimental results on two benchmark multi-label databases show that our approach can effectively capture the co-occurrence and mutual-exclusion relations among labels. The relations modeled by our approach are more flexible than the pairwise or fixed-subset label relations captured by current multi-label learning methods. Thus, our approach improves performance over current multi-label classifiers. Furthermore, our approach demonstrates its robustness to incomplete multi-label classification.

5.
In many domains, when several competing classifiers are available, we want to combine some or all of them through a combination function to obtain a more accurate classifier. In this paper we propose a 'class-indifferent' method for combining classifier decisions represented by evidential structures called triplet and quartet, using Dempster's rule of combination. This method is unique in that it distinguishes important elements from trivial ones when representing classifier decisions, makes use of more information than others in calculating the support for class labels, and provides a practical way to apply the theoretically appealing Dempster-Shafer theory of evidence to the problem of ensemble learning. We present a formalism for modelling classifier decisions as triplet mass functions and establish a range of formulae for combining these mass functions in order to arrive at a consensus decision. In addition, we carry out a comparative study with the alternatives of simplet and dichotomous structure, and also compare two combination methods, Dempster's rule and majority voting, over the UCI benchmark data, to demonstrate the advantage our approach offers.
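A small sketch of Dempster's rule of combination for two mass functions over subsets of class labels. The triplet structure in the paper assigns mass to a preferred singleton, its complement, and the whole frame; here mass functions are represented generically as dicts keyed by frozensets, which is an illustrative choice rather than the paper's exact formalism.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions (dicts: frozenset -> mass) with
    Dempster's rule, normalizing away the conflicting mass."""
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb
    if conflict >= 1.0:
        raise ValueError("Totally conflicting evidence")
    return {s: w / (1.0 - conflict) for s, w in combined.items()}

frame = frozenset({"c1", "c2", "c3"})
# Triplet-style mass functions: preferred label, its complement, and the frame.
m1 = {frozenset({"c1"}): 0.6, frame - frozenset({"c1"}): 0.3, frame: 0.1}
m2 = {frozenset({"c1"}): 0.5, frame - frozenset({"c1"}): 0.2, frame: 0.3}
print(dempster_combine(m1, m2))
```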

6.
Fuzzy memberships can be understood as coverage functions of random sets. This interpretation makes sense in the context of fuzzy rule learning: a random-sets-based semantics of the linguistic labels is compatible with the use of fuzzy statistics for obtaining knowledge bases from data. In particular, in this paper we formulate the learning of a fuzzy-rule-based classifier as a problem of statistical inference, and propose to learn rules by maximizing the likelihood of the classifier. Furthermore, we extend this methodology to interval-censored data and propose to use upper and lower bounds of the likelihood to evolve rule bases. Combining descent algorithms and a co-evolutionary scheme, we are able to obtain rule-based classifiers from imprecise data sets, and can also identify the conflicting instances in the training set: those that contribute the most to the indetermination of the likelihood of the model.

7.
Making comprehensive use of incomplete multi-modality data is a difficult problem of strong practical value. Most previous multimodal learning algorithms require massive training data with complete modalities and annotated labels, which greatly limits their practicality. Although some existing algorithms can be used for the data imputation task, they still have two disadvantages: (1) they cannot accurately control the semantics of the imputed modalities; and (2) they need to establish multiple independent converters between every pair of modalities when extended to multimodal cases. To overcome these limitations, we propose a novel doubly semi-supervised multimodal learning (DSML) framework. Specifically, DSML uses a modality-shared latent space and multiple modality-specific generators to associate multiple modalities together. We divide the shared latent space into two independent parts, semantic labels and semantic-free styles, which makes it easy to control the semantics of generated samples. In addition, each modality has its own separate encoder and classifier to infer the corresponding semantic and semantic-free latent variables. The DSML framework can be adversarially trained using our specially designed softmax-based discriminators. Extensive experimental results show that DSML outperforms the baselines on three tasks: semi-supervised classification, missing modality imputation, and cross-modality retrieval.

8.
Recently, large-scale image annotation datasets have been collected with millions of images and thousands of possible annotations. Latent variable models, or embedding methods, that simultaneously learn semantic representations of object labels and image representations can provide tractable solutions on such tasks. In this work, we are interested in jointly learning representations both for the objects in an image and for the parts of those objects, because such deeper semantic representations could bring a leap forward in image retrieval or browsing. Despite the size of these datasets, annotated data for objects and parts can be costly to obtain and may not be available. In this paper, we propose to bypass this cost with a method able to learn to jointly label objects and parts without requiring exhaustively labeled data. We design a model architecture that can be trained under a proxy supervision obtained by combining standard image annotation (from ImageNet) with semantic part-based within-label relations (from WordNet). The model itself is designed to capture both object image to object label similarities and object label to object part label similarities in a single joint system. Experiments conducted on our combined data and a precisely annotated evaluation set demonstrate the usefulness of our approach.

9.
谭桥宇, 余国先, 王峻, 郭茂祖. 《软件学报》, 2017, 28(11): 2851-2864
Weak-label learning is an important branch of multi-label learning; in recent years it has been widely studied and applied to problems such as completing and predicting the missing labels of multi-label samples. However, for high-dimensional data, which have large feature sets, tend to carry multiple semantic labels, and often suffer from missing labels, existing weak-label learning methods are easily disturbed by the noisy and redundant features such data contain. To classify high-dimensional multi-label data accurately, an ensemble weak-label classification method based on maximizing the dependence between labels and features, called EnWL, is proposed. EnWL first applies affinity propagation clustering repeatedly in the feature space of the high-dimensional data, each time selecting the cluster centers to form a representative feature subset and thus reducing the interference of noisy and redundant features; it then trains, on each feature subset, a semi-supervised multi-label classifier that maximizes label-feature dependence; finally, these classifiers are combined by voting to perform multi-label classification. Experimental results on several high-dimensional data sets show that EnWL outperforms existing related methods on multiple evaluation metrics.
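A rough scikit-learn sketch of the first EnWL step only: running affinity propagation over the features, i.e. clustering the columns of the data matrix, and keeping the exemplar features as one representative subset. The later semi-supervised classifiers and voting ensemble are not shown.

```python
import numpy as np
from sklearn.cluster import AffinityPropagation

def representative_features(X, random_state=0):
    """Cluster the feature columns with affinity propagation and
    return the indices of the exemplar (cluster-center) features."""
    ap = AffinityPropagation(random_state=random_state)
    ap.fit(X.T)                      # treat each feature (column) as a sample
    return ap.cluster_centers_indices_

X = np.random.rand(200, 50)          # 200 samples, 50 (possibly redundant) features
idx = representative_features(X)
X_sub = X[:, idx]                    # one representative feature subset
print(len(idx), "features kept")
```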

10.
Many applications face the problem of learning from multiple information sources, where sources may be labeled or unlabeled, and where information from multiple sources may be beneficial but cannot be integrated into a single source for learning. In this paper, we propose an ensemble learning method for such labeled and unlabeled sources. We first present two label propagation methods to infer the labels of training objects from unlabeled sources by making full use of class label information from labeled sources and internal structure information from unlabeled sources; these processes are referred to as global consensus and local consensus, respectively. We then predict the labels of testing objects using the ensemble learning model over the multiple information sources. Experimental results show that our method outperforms two baseline methods. Meanwhile, our method is more scalable for large information sources and more robust to labeled sources with noisy data.
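A compact sketch of the label-propagation idea using scikit-learn's LabelSpreading: objects from the unlabeled source are marked with -1 and receive labels propagated from the labeled source over a kNN graph built on all objects. This is a generic stand-in for the paper's global/local consensus procedures.

```python
import numpy as np
from sklearn.semi_supervised import LabelSpreading

rng = np.random.default_rng(0)
X_labeled = rng.normal(size=(60, 5))            # objects from a labeled source
y_labeled = rng.integers(0, 2, size=60)
X_unlabeled = rng.normal(size=(140, 5))         # objects from an unlabeled source

X = np.vstack([X_labeled, X_unlabeled])
y = np.concatenate([y_labeled, -np.ones(140, dtype=int)])   # -1 marks "unknown"

# Propagate labels from the labeled source through a kNN graph over all objects.
model = LabelSpreading(kernel="knn", n_neighbors=7).fit(X, y)
inferred = model.transduction_[60:]              # inferred labels for the unlabeled objects
```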

11.
Deep learning techniques for Sentiment Analysis have become very popular. They provide automatic feature extraction, richer representation capabilities, and better performance than traditional feature-based techniques (i.e., surface methods). Traditional surface approaches are based on complex manually extracted features, and this extraction process is a fundamental question in feature-driven methods. These long-established approaches can yield strong baselines, and their predictive capabilities can be used in conjunction with the emerging deep learning methods. In this paper we seek to improve the performance of deep learning techniques by integrating them with traditional surface approaches based on manually extracted features. The contributions of this paper are sixfold. First, we develop a deep-learning-based sentiment classifier using a word embeddings model and a linear machine learning algorithm; this classifier serves as a baseline for subsequent results. Second, we propose two ensemble techniques which aggregate our baseline classifier with other surface classifiers widely used in Sentiment Analysis. Third, we propose two models for combining surface and deep features to merge information from several sources. Fourth, we introduce a taxonomy for classifying the different models found in the literature, as well as the ones we propose. Fifth, we conduct several experiments to compare the performance of these models with the deep learning baseline, using seven public datasets extracted from the microblogging and movie-review domains. Finally, a statistical study confirms that the performance of the proposed models surpasses that of our original baseline on F1-score.

12.
With computers and the Internet being essential in everyday life, malware poses serious and evolving threats to their security, making the detection of malware of utmost concern. Accordingly, there has been much research on intelligent malware detection applying data mining and machine learning techniques. Though great results have been achieved with these methods, most of them are built on shallow learning architectures. Due to its superior ability in feature learning through multilayer deep architectures, deep learning is starting to be leveraged in industrial and academic research for different applications. In this paper, based on the Windows application programming interface calls extracted from portable executable files, we study how a deep learning architecture can be designed for intelligent malware detection. We propose a heterogeneous deep learning framework composed of an AutoEncoder stacked with multilayer restricted Boltzmann machines and a layer of associative memory to detect newly unknown malware. The proposed deep learning model performs greedy layer-wise training for unsupervised feature learning, followed by supervised parameter fine-tuning. Different from existing works, which only made use of files with class labels (either malicious or benign) during the training phase, we utilize both labeled and unlabeled file samples to pre-train multiple layers in the heterogeneous deep learning framework from the bottom up for feature learning. A comprehensive experimental study on a real and large file collection from Comodo Cloud Security Center is performed to compare various malware detection approaches. Promising experimental results demonstrate that our proposed deep learning framework can further improve overall malware-detection performance compared with traditional shallow learning methods, deep learning methods with homogeneous frameworks, and other existing anti-malware scanners. The proposed heterogeneous deep learning framework can also be readily applied to other malware detection tasks.
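The exact AutoEncoder + stacked-RBM + associative-memory architecture is beyond a short snippet, but a rough scikit-learn sketch of the general recipe, greedy layer-wise unsupervised pretraining on both labeled and unlabeled API-call feature vectors followed by a supervised classifier on the learned features, might look like this. The data shapes and hyperparameters are illustrative assumptions.

```python
import numpy as np
from sklearn.neural_network import BernoulliRBM
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_all = rng.integers(0, 2, size=(1000, 300)).astype(float)  # binary API-call features
y_lab = rng.integers(0, 2, size=400)                         # labels for a subset only

# Greedy layer-wise unsupervised pretraining on ALL samples (labeled + unlabeled).
rbm1 = BernoulliRBM(n_components=128, learning_rate=0.05, n_iter=20, random_state=0)
rbm2 = BernoulliRBM(n_components=64, learning_rate=0.05, n_iter=20, random_state=0)
H1 = rbm1.fit_transform(X_all)
H2 = rbm2.fit_transform(H1)

# Supervised stage on the labeled subset, using the learned deep features.
clf = LogisticRegression(max_iter=1000).fit(H2[:400], y_lab)
scores = clf.predict_proba(H2[400:])[:, 1]      # maliciousness scores for the rest
```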

13.
It is well known that deep learning depends on a large amount of clean data. Because of the high cost of annotation, various methods have been devoted to annotating data automatically. However, this introduces a large number of noisy labels into the datasets, which poses a challenging problem. In this paper, we propose a new method for selecting training data accurately. Specifically, our approach fits a mixture model to the per-sample losses of the raw label and the predicted label, and the mixture model is used to dynamically divide the training set into a correctly labeled set, a correctly predicted set, and a wrong set. A network is then trained with these sets in a supervised manner. To counter the confirmation bias problem, we train two networks alternately, and each network establishes the data division used to teach the other network. When optimizing network parameters, the labels of the samples are fused according to the probabilities from the mixture model. Experiments on CIFAR-10, CIFAR-100, and Clothing1M demonstrate that this method matches or surpasses state-of-the-art methods.
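A minimal sketch of the division step: fitting a two-component Gaussian mixture to per-sample losses and using the posterior of the low-loss component as the probability that a sample's raw label is correct. The alternating two-network training and label fusion are omitted, and the loss distribution below is synthetic.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def clean_probabilities(losses):
    """Fit a 2-component GMM to per-sample losses and return, for each
    sample, the posterior probability of the low-loss (clean) component."""
    losses = np.asarray(losses).reshape(-1, 1)
    gmm = GaussianMixture(n_components=2, random_state=0).fit(losses)
    clean_comp = np.argmin(gmm.means_.ravel())      # component with smaller mean loss
    return gmm.predict_proba(losses)[:, clean_comp]

losses = np.concatenate([np.random.gamma(2.0, 0.1, 900),    # mostly small (clean) losses
                         np.random.gamma(2.0, 1.0, 100)])   # a tail of large (noisy) losses
w = clean_probabilities(losses)
clean_set = np.where(w > 0.5)[0]                    # candidate correctly-labeled samples
noisy_set = np.where(w <= 0.5)[0]
```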

14.
In pattern classification, it is necessary to treat efficiently not only feature vectors but also feature matrices, defined as two-way data, while preserving two-way structure such as spatio-temporal relationships. The classifier for a feature matrix is generally formulated in a bilinear form composed of row and column weights which jointly result in a matrix weight. The rank of the matrix should be low from the viewpoint of generalization performance and computational cost. For that purpose, we propose a low-rank bilinear classifier based on efficient convex optimization. In the proposed method, the classifier is optimized by minimizing the trace norm of the classifier (matrix) to reduce its rank without any hard constraint on it. We formulate the optimization problem in a tractable convex form and provide a procedure to solve it efficiently with the global optimum. In addition, we propose two novel extensions of the bilinear classifier in terms of multiple kernel learning and cross-modal learning. By kernelizing the bilinear method, we naturally induce a novel multiple kernel learning scheme, which integrates both the inter kernels between heterogeneous reproducing kernel Hilbert spaces (RKHSs) and the ordinary kernels within respective RKHSs into a new discriminative kernel in a unified manner using the bilinear model. For cross-modal learning, we consider mapping multi-modal features into a common space in which they are subsequently classified. We show that the projection and the classification are jointly represented by the bilinear model, and then propose a method to optimize both of them simultaneously in the bilinear framework. In experiments on various visual classification tasks, the proposed methods exhibit favorable performance compared with other methods.
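A numpy sketch of the core low-rank ingredient: one proximal-gradient step for a trace-norm-regularized matrix classifier, where the proximal operator is singular-value soft-thresholding. This is a generic solver for that kind of objective under a squared hinge loss, not the paper's specific convex formulation.

```python
import numpy as np

def svd_soft_threshold(W, tau):
    """Proximal operator of the trace (nuclear) norm: shrink singular values by tau."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def prox_grad_step(W, Xs, ys, lr=0.1, lam=0.05):
    """One proximal-gradient step for squared hinge loss on matrix inputs X_i
    with scores <W, X_i>, plus trace-norm regularization."""
    grad = np.zeros_like(W)
    for X, y in zip(Xs, ys):
        margin = y * np.sum(W * X)
        if margin < 1.0:                        # active squared-hinge term
            grad += -2.0 * (1.0 - margin) * y * X
    grad /= len(Xs)
    return svd_soft_threshold(W - lr * grad, lr * lam)

rng = np.random.default_rng(0)
Xs = rng.normal(size=(50, 8, 6))                # 50 feature matrices (e.g. space x time)
ys = rng.choice([-1, 1], size=50)
W = np.zeros((8, 6))
for _ in range(100):
    W = prox_grad_step(W, Xs, ys)
print("rank of learned W:", np.linalg.matrix_rank(W, tol=1e-6))
```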

15.
Multi-label learning is a machine-learning framework proposed for instances that are associated with a set of labels simultaneously, and it is one of the hot research topics in the field; dimensionality reduction is an important and challenging task in multi-label learning. In contrast to supervised multi-label dimensionality-reduction methods, an unsupervised multi-label dimensionality-reduction method based on an autoencoder network is proposed. First, an autoencoder neural network is constructed to encode the input data and decode the output; then, a sparsity constraint is introduced into the overall cost, which is solved iteratively by gradient descent; finally, the autoencoder learning model is obtained through deep-learning training, and data features are extracted to achieve dimensionality reduction. In the experiments, the multi-label algorithm ML-kNN is used as the classifier, and the method is compared with four other methods on six public data sets. The results show that the method can effectively extract features without using labels, reduce the dimensionality of multi-label data, and steadily improve multi-label learning performance.
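A rough PyTorch sketch of the unsupervised part, assuming torch is available: an autoencoder trained by gradient descent without using any labels, whose hidden code then serves as the reduced representation fed to ML-kNN or another multi-label classifier. The sparsity constraint is implemented here as an L1 penalty on the code for simplicity; the paper's exact constraint may differ.

```python
import torch
import torch.nn as nn

class SparseAE(nn.Module):
    def __init__(self, in_dim, code_dim):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, code_dim), nn.Sigmoid())
        self.decoder = nn.Linear(code_dim, in_dim)

    def forward(self, x):
        code = self.encoder(x)
        return self.decoder(code), code

X = torch.rand(500, 300)                 # multi-label feature matrix (labels unused)
model = SparseAE(in_dim=300, code_dim=30)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
beta = 1e-3                              # weight of the sparsity penalty

for epoch in range(200):
    opt.zero_grad()
    recon, code = model(X)
    loss = nn.functional.mse_loss(recon, X) + beta * code.abs().mean()
    loss.backward()
    opt.step()

Z = model.encoder(X).detach()            # 30-dimensional representation for ML-kNN
```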

16.
Studies have shown that extreme learning machines (ELM) and discriminative dictionary learning algorithms are highly efficient and accurate in the field of image classification. However, the two methods also have their own drawbacks: ELM is not very robust to noise, while discriminative dictionary learning is time-consuming in the classification stage. To unify these complementary strengths and improve classification performance, this paper proposes a discriminative analysis dictionary learning model that incorporates an extreme learning machine. The model uses an iterative optimization algorithm to learn the optimal discriminative analysis dictionary and ELM classifier. To verify the effectiveness of the proposed algorithm, classification experiments are carried out on face data sets. The results show that, compared with currently popular dictionary learning algorithms and ELM, the proposed algorithm achieves better classification results.
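A minimal numpy sketch of the extreme learning machine component alone: a random hidden layer followed by a closed-form ridge-regression solution for the output weights. The joint iterative optimization with the analysis dictionary is not shown, and the data here are synthetic.

```python
import numpy as np

class ELM:
    """Basic extreme learning machine: random hidden layer + ridge output weights."""
    def __init__(self, n_hidden=200, reg=1e-2, seed=0):
        self.n_hidden, self.reg = n_hidden, reg
        self.rng = np.random.default_rng(seed)

    def _hidden(self, X):
        return np.tanh(X @ self.W + self.b)

    def fit(self, X, Y):
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = self._hidden(X)
        # Closed-form ridge solution for the output weights.
        self.beta = np.linalg.solve(H.T @ H + self.reg * np.eye(self.n_hidden), H.T @ Y)
        return self

    def predict(self, X):
        return self._hidden(X) @ self.beta

rng = np.random.default_rng(1)
X, labels = rng.normal(size=(300, 64)), rng.integers(0, 5, size=300)
Y = np.eye(5)[labels]                    # one-hot class targets
pred = ELM().fit(X, Y).predict(X).argmax(axis=1)
```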

17.
In this paper we propose a deep learning solution to age estimation from a single face image without the use of facial landmarks, and introduce the IMDB-WIKI dataset, the largest public dataset of face images with age and gender labels. While research on real age estimation spans decades, the study of apparent age estimation, the age as perceived by other humans from a face image, is a recent endeavor. We tackle both tasks with convolutional neural networks (CNNs) of VGG-16 architecture pre-trained on ImageNet for image classification. We pose the age estimation problem as a deep classification problem followed by a softmax expected value refinement. The key factors of our solution are: deep models learned from large data, robust face alignment, and an expected-value formulation for age regression. We validate our methods on standard benchmarks and achieve state-of-the-art results for both real and apparent age estimation.
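A tiny numpy sketch of the expected-value refinement: each age is treated as a class, and the final estimate is the softmax-weighted average of the ages rather than the arg-max class. The age range 0..100 and the random logits are illustrative placeholders for the CNN outputs.

```python
import numpy as np

def expected_age(logits, ages=np.arange(0, 101)):
    """Turn per-age classification logits into a continuous age estimate."""
    p = np.exp(logits - logits.max())     # numerically stable softmax
    p /= p.sum()
    return float(np.dot(p, ages))         # softmax expected value

logits = np.random.randn(101)             # stand-in for CNN outputs over ages 0..100
print(expected_age(logits))
```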

18.
Manifold learning methods for unsupervised nonlinear dimensionality reduction have proven effective in the visualization of high-dimensional data sets. When dealing with classification tasks, supervised extensions of manifold learning techniques, in which class labels are used to improve the embedding of the training points, require an appropriate method for out-of-sample mapping. In this paper we propose multi-output kernel ridge regression (KRR) for out-of-sample mapping in supervised manifold learning, in place of the general regression neural networks (GRNN) adopted by previous studies on the subject. Specifically, we consider a supervised agglomerative variant of Isomap and compare the performance of classification methods when the out-of-sample embedding is based on KRR and GRNN, respectively. Extensive computational experiments, using support vector machines and k-nearest neighbors as base classifiers, provide statistical evidence that out-of-sample mapping based on KRR consistently dominates its GRNN counterpart, and that supervised agglomerative Isomap with KRR achieves higher accuracy than direct classification methods on most data sets.
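A scikit-learn sketch of the out-of-sample mapping idea: multi-output kernel ridge regression fitted from the original high-dimensional points to their manifold embedding, then applied to unseen points before running the base classifier. Plain Isomap is used here as a stand-in for the supervised agglomerative variant in the paper, and the data and hyperparameters are illustrative.

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.manifold import Isomap
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X_train, y_train = rng.normal(size=(200, 50)), rng.integers(0, 3, size=200)
X_test = rng.normal(size=(40, 50))

# Plain Isomap as a stand-in for the supervised agglomerative variant.
Z_train = Isomap(n_components=5).fit_transform(X_train)

# Multi-output KRR learns the map from input space to the embedding space.
krr = KernelRidge(kernel="rbf", alpha=1.0, gamma=1e-2).fit(X_train, Z_train)
Z_test = krr.predict(X_test)                    # out-of-sample embedding

clf = KNeighborsClassifier(n_neighbors=5).fit(Z_train, y_train)
y_pred = clf.predict(Z_test)
```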

19.
In this paper, we consider the multi-task metric learning problem, i.e., the problem of learning multiple metrics from several correlated tasks simultaneously. Despite its importance, there are only a limited number of approaches in this field. While existing methods often straightforwardly extend vector-based methods, we propose to couple multiple related metric learning tasks with the von Neumann divergence. On one hand, the novel regularized approach extends previous methods from vector regularization to a general matrix regularization framework; on the other hand, and more importantly, by exploiting the von Neumann divergence as the regularizer, the new multi-task metric learning method can well preserve the data geometry. This leads to more appropriate propagation of side information among tasks and offers potential for further improving performance. We propose the concept of geometry-preserving probability and show that our framework encourages a higher geometry-preserving probability in theory. In addition, our formulation proves to be jointly convex, and the globally optimal solution can be guaranteed. We have conducted extensive experiments on six data sets (across very different disciplines), and the results verify that our proposed approach can consistently outperform almost all the current methods.
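A small sketch of the regularizer itself: the von Neumann divergence between two symmetric positive-definite metric matrices, D(A||B) = tr(A log A − A log B − A + B), computed with scipy's matrix logarithm. The example matrices are synthetic.

```python
import numpy as np
from scipy.linalg import logm

def von_neumann_divergence(A, B):
    """von Neumann divergence between SPD matrices A and B:
    tr(A log A - A log B - A + B)."""
    val = np.trace(A @ logm(A) - A @ logm(B) - A + B)
    return float(np.real(val))               # discard tiny numerical imaginary parts

rng = np.random.default_rng(0)
M, N = rng.normal(size=(5, 5)), rng.normal(size=(5, 5))
A = M @ M.T + np.eye(5)                      # two SPD "metric" matrices
B = N @ N.T + np.eye(5)

print(von_neumann_divergence(A, B))          # >= 0, and 0 iff A == B
print(von_neumann_divergence(A, A))          # ~0
```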

20.
Liu Bo, Liu Qian, Xiao Yanshan. Applied Intelligence, 2022, 52(3): 2465-2479

Positive and unlabeled learning (PU learning) has been studied to address the situation in which only positive and unlabeled examples are available. Most previous work has been devoted to identifying negative examples from the unlabeled data so that supervised learning approaches can be applied to build a classifier. However, for the remaining unlabeled data, such methods either exclude them from the learning phase or force them to belong to a class, which limits the performance of PU learning. In addition, previous PU methods assume that the training data and the testing data have the same feature representations. However, we can often collect features that the training data have but the test data do not; such features are called privileged information. In this paper, we propose a new similarity-based method for positive and unlabeled learning with privileged information (SPUPIL), which consists of two steps. SPUPIL first uses a KNN method to generate similarity weights, and then incorporates the similarity weights and the privileged information into a Ranking-SVM-based learning model to build a more accurate classifier. We also use the Lagrangian method to transform the original model into its dual problem, and solve it to obtain the classifier. Extensive experiments on real data sets show that SPUPIL outperforms state-of-the-art PU learning methods.
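A rough scikit-learn sketch of the first SPUPIL step as described above: assigning each unlabeled example a similarity weight based on its k nearest neighbors among the positive examples. The Ranking-SVM stage with privileged information is not shown, and the RBF-based weighting is an illustrative assumption rather than the paper's exact formula.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def knn_similarity_weights(X_pos, X_unl, k=5, gamma=1.0):
    """Weight each unlabeled example by its average RBF similarity to its
    k nearest positive examples; a high weight suggests a likely positive."""
    nn = NearestNeighbors(n_neighbors=k).fit(X_pos)
    dist, _ = nn.kneighbors(X_unl)
    return np.exp(-gamma * dist ** 2).mean(axis=1)

rng = np.random.default_rng(0)
X_pos = rng.normal(loc=1.0, size=(50, 10))      # known positive examples
X_unl = rng.normal(loc=0.0, size=(200, 10))     # unlabeled examples
w = knn_similarity_weights(X_pos, X_unl)        # similarity weights in (0, 1]
```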

