Similar Documents
20 similar documents found (search time: 484 ms)
1.
Compared with sentiment classification of monolingual short texts, code-mixed text is harder to classify: the words that express sentiment come from more than one language and the grammatical structure is complex, so traditional word-embedding methods alone do not give the classifier enough useful features, resulting in poor classification. To address these problems, a dual-channel composite model fusing character and word features is proposed. First, to handle dataset imbalance, an undersampling algorithm based on BERT semantic similarity is proposed. Second, a dual-channel deep network is built: the raw data, embedded at the character level and at the word level respectively, are fed through two channels into modules composed of a CNN and an LSTM with an attention mechanism for multi-granularity feature extraction. Finally, the features from the channels are fused for classification. Experiments on the five-class code-mixed dataset released for NLPCC 2018 Task 1 show that the overall performance of the model improves on representative current deep learning models.
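The undersampling step in this abstract can be sketched roughly as follows: drop a majority-class sample whenever its sentence embedding is nearly identical to one already kept. This is a minimal illustration, not the authors' implementation; the paper uses BERT sentence embeddings, which are stubbed here with plain vectors, and `undersample` and `cosine` are hypothetical names.

```python
import math

def cosine(u, v):
    """Cosine similarity of two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def undersample(embeddings, threshold=0.95):
    """Keep a sample only if it is not a near-duplicate (by cosine
    similarity) of any sample already kept; return kept indices."""
    kept = []
    for i, e in enumerate(embeddings):
        if all(cosine(e, embeddings[j]) < threshold for j in kept):
            kept.append(i)
    return kept
```

In practice this would be run only over the majority class, with the threshold tuned against the minority-class size.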

2.
In this paper we develop a formalization of semantic relations that facilitates efficient implementations of relations in lexical databases or knowledge representation systems using bases. The formalization of relations is based on a modeling of hierarchical relations in Formal Concept Analysis. Further, relations are analyzed according to Relational Concept Analysis, which allows a representation of semantic relations consisting of relational components and quantificational tags. This representation utilizes mathematical properties of semantic relations. The quantificational tags imply inheritance rules among semantic relations that can be used to check the consistency of relations and to reduce the redundancy in implementations by storing only the basis elements of semantic relations. The research presented in this paper is an example of an application of Relational Concept Analysis to lexical databases and knowledge representation systems (cf. Priss 1996) which is part of a larger framework of research on natural language analysis and formalization.

3.
Kernel-Based Web Mining (cited by 2)
Word-space classification methods struggle with the high dimensionality of text and cannot capture semantic concepts. Using kernel principal component analysis and support vector machines, a new method is proposed that extracts semantic concepts by reducing the dimensionality of the text data and then classifies texts on the basis of those concepts. First, documents are mapped into a high-dimensional linear feature space to eliminate nonlinear features; then principal component analysis in the mapped space removes correlations among variables, achieving dimensionality reduction and semantic concept extraction and yielding a semantic concept space for the documents; finally, a support vector machine classifies in that space. Through a newly defined kernel function, the mapping into the concept space need not be computed explicitly, so concept-based classification can be carried out directly in the original document vector space. A kernelized GHA method adaptively and iteratively solves for the eigenvectors and eigenvalues of the kernel matrix, making the approach suitable for large-scale text classification. Experimental results show that the method improves text classification performance.
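One small but essential piece of kernel PCA is centering the kernel matrix so the implicitly mapped data have zero mean in feature space. The sketch below shows only that step, under the assumption of a precomputed symmetric kernel matrix; the paper's full pipeline (kernelized GHA, SVM) is not reproduced, and `center_kernel` is a hypothetical name.

```python
def center_kernel(K):
    """Center a kernel matrix K (list of lists) in feature space:
    K' = K - 1K - K1 + 1K1, where 1 is the n-by-n matrix with
    entries 1/n. Rows and columns of K' then sum to zero."""
    n = len(K)
    row = [sum(r) / n for r in K]          # row means
    total = sum(row) / n                   # grand mean
    return [[K[i][j] - row[i] - row[j] + total for j in range(n)]
            for i in range(n)]
```

Principal components in feature space are then obtained from the eigenvectors of the centered matrix.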

4.
In text classification research, the vector space model is simple to construct, but it represents only term-frequency information and ignores the structural, semantic, and word-order information among features, so different documents may end up represented by the same vector. To address this, this paper represents a text as a directed graph (a "text graph"), which effectively compensates for the missing structural information. Graph kernel techniques are then applied to text classification: a graph kernel suited to computing similarity between text graphs, the 间隔通路核 (gapped-path kernel), is proposed, and a support vector machine classifies the texts. Experiments on text collections show that, compared with the vector space model and with other kernel functions, the gapped-path kernel achieves higher classification accuracy, indicating that it is a good graph-structure similarity measure that can be widely applied to text classification.
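The core motivation above — that a directed graph preserves word order a bag-of-words vector discards — can be illustrated with a minimal sketch (not the paper's construction; `text_graph` is a hypothetical helper):

```python
def text_graph(tokens):
    """Build a directed graph over the tokens of a text: an edge
    u -> v is added whenever token v immediately follows token u.
    Two texts with identical bags of words but different word order
    produce different edge sets."""
    edges = set()
    for u, v in zip(tokens, tokens[1:]):
        edges.add((u, v))
    return edges
```

For example, "deep learning model" and "model learning deep" share one term-frequency vector but yield distinct graphs, which is exactly the distinction a graph kernel can exploit.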

5.
Text classification is widely applied, and its algorithms have long attracted research attention. Traditional text classification algorithms, however, commonly suffer from overly high-dimensional feature vectors, no modeling of the semantic relations between keywords, and too many training parameters, all of which hurt classification accuracy and other performance measures. To address these problems, a text classification algorithm combining word embeddings with a GRU is proposed. The text is first preprocessed; GloVe word embeddings are then used to capture as much semantic and syntactic information as possible while reducing the dimensionality of the vector space; finally, a GRU neural network is trained, preserving as far as possible the semantic associations between distant words in long texts. Experimental results show that the algorithm noticeably improves text classification performance.
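The reason a GRU can "preserve semantic associations between distant words" is its update gate, which decides how much of the previous hidden state to carry forward. A minimal scalar-state sketch of one GRU step (standard GRU equations, not the paper's network; weight names in `w` are hypothetical):

```python
import math

def gru_step(x, h, w):
    """One GRU step for scalar input x and state h, weights in dict w.
    z is the update gate (how much new content to mix in), r the reset
    gate (how much old state feeds the candidate h_tilde)."""
    sig = lambda a: 1.0 / (1.0 + math.exp(-a))
    z = sig(w["wz"] * x + w["uz"] * h)
    r = sig(w["wr"] * x + w["ur"] * h)
    h_tilde = math.tanh(w["wh"] * x + w["uh"] * (r * h))
    return (1 - z) * h + z * h_tilde
```

When z stays near 0 across many steps, h passes through almost unchanged, which is how long-range context survives a long text.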

6.
Extreme multi-label text classification is a challenging research topic characterized by a large label set, complex inter-class relations, and imbalanced data distributions. Existing models make insufficient use of label semantics, and their performance is limited. This paper proposes a strategy for improving extreme multi-label text classification models guided by hierarchical label semantics: during training and prediction, the model receives weakly supervised semantic guidance derived from the label hierarchy, and this weak supervision constrains the semantic boundaries of the multiple labels to be predicted. Experiments on standard datasets show that the strategy effectively improves existing models, with especially notable gains on short-text datasets, raising macro precision by up to 21.23%.

7.
For short-text classification in the archives domain, an automatic classification method based on concept networks is designed. A domain ontology is built by analyzing the linguistic characteristics of short texts in the domain; natural language processing techniques then convert a short text into a structured concept network represented in the Resource Description Framework, on which a semantic similarity between concept networks is defined, enabling automatic classification of the archives. Experimental results show that, compared with traditional feature-selection-based short-text classification methods, the proposed method reduces the classification error rate by 24.2% and effectively improves system performance.

8.
A novel Italian Sign Language MultiWordNet (LMWN), which integrates the MultiWordNet (MWN) lexical database with Italian Sign Language (LIS), is presented in this paper. The approach relies on LIS lexical resources that support searching for Italian lemmas in the database and displaying the corresponding LIS signs. A lexical frequency analysis of the lexicon and some newly created signs approved by expert LIS signers are also discussed. The larger MWN database helps enrich the variety and comprehensiveness of the lexicon. We also describe the approach that links Italian lemmas and LIS signs to extract and display bilingual information from the collected lexicon, as well as the semantic relationships of LIS signs with MWN. Users can view the meanings of almost one-fourth of the lemmas of MWN in LIS.

9.
For more than a decade, text categorization has been an active field of research in the machine learning community. Most approaches are based on term occurrence frequency. The performance of such surface-based methods can decrease when the texts are too complex, i.e., ambiguous. One alternative is to use semantic-based approaches to process textual documents according to their meaning. Furthermore, research in text categorization has mainly focused on "flat texts", whereas many documents are now semi-structured, especially in the XML format. In this paper, we propose a semantic kernel for semi-structured biomedical documents. The semantic meanings of words are extracted using the Unified Medical Language System (UMLS) framework. The kernel, with an SVM classifier, has been applied to a text categorization task on a medical corpus of free-text documents. The results show that the semantic kernel outperforms the linear kernel and the naive Bayes classifier. Moreover, this kernel was ranked in the top 10 of 44 classification methods at the 2007 Computational Medicine Center (CMC) Medical NLP International Challenge.

10.
Sentence and short-text semantic similarity measures are becoming an important part of many natural language processing tasks, such as text summarization and conversational agents. This paper presents SyMSS, a new method for computing short-text and sentence semantic similarity. The method is based on the notion that the meaning of a sentence is made up of not only the meanings of its individual words, but also the structural way the words are combined. Thus, SyMSS captures and combines syntactic and semantic information to compute the semantic similarity of two sentences. Semantic information is obtained from a lexical database. Syntactic information is obtained through a deep parsing process that finds the phrases in each sentence. With this information, the proposed method measures the semantic similarity between concepts that play the same syntactic role. Psychological plausibility is added to the method by using previous findings about how humans weight different syntactic roles when computing semantic similarity. The results show that SyMSS outperforms state-of-the-art methods in terms of rank correlation with human intuition, thus proving the importance of syntactic information in sentence semantic similarity computation.

11.
Latent Semantic Indexing and Its Application to Chinese Text Processing (cited by 33)
Information retrieval is in essence semantic retrieval, but traditional retrieval systems index independent terms, so their retrieval effectiveness is unsatisfactory. Latent Semantic Indexing is a newer retrieval model that uses singular value decomposition to project term vectors and document vectors into a low-dimensional space, reducing the semantic ambiguity between terms and documents and making the semantic relations among documents clearer. Experimental and theoretical results confirm that Latent Semantic Indexing achieves better retrieval effectiveness. This paper reviews the theoretical basis of Latent Semantic Indexing and studies its applications in Chinese text processing, including Chinese text retrieval, Chinese text classification, and Chinese text clustering.
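The low-dimensional projection at the heart of LSI comes from the top singular vectors of the term-document matrix. As a minimal sketch (assumptions: documents as columns, one latent dimension via power iteration on AᵀA rather than a full SVD; `top_singular_vector` is a hypothetical helper):

```python
def top_singular_vector(A, iters=50):
    """Power iteration to approximate the top right singular vector
    of a term-document matrix A (rows = terms, cols = documents).
    This vector gives each document's weight on the first latent
    semantic dimension; full LSI keeps the top k such dimensions."""
    n = len(A[0])
    v = [1.0] * n
    for _ in range(iters):
        w = [sum(A[i][j] * v[j] for j in range(n)) for i in range(len(A))]
        v = [sum(A[i][j] * w[i] for i in range(len(A))) for j in range(n)]
        norm = sum(x * x for x in v) ** 0.5
        v = [x / norm for x in v]
    return v
```

Documents whose projections lie close together in the reduced space are treated as semantically related even when they share few literal terms.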

12.
Exploiting semantic resources for large scale text categorization (cited by 1)
The traditional supervised classifier for Text Categorization (TC) is learned from a set of hand-labeled documents. However, manual data labeling is labor-intensive and time-consuming, especially for a complex TC task with hundreds or thousands of categories. To address this issue, many semi-supervised methods have been reported that use both labeled and unlabeled documents for TC, but they still need a small set of labeled data for each category. In this paper, we propose a Fully Automatic Categorization approach for Text (FACT), where no manual labeling effort is required. In FACT, lexical databases serve as semantic resources for category name understanding. It combines semantic analysis of category names with statistical analysis of the unlabeled document set for fully automatic training-data construction. With the support of lexical databases, we first use the category name to generate a set of features as a representative profile for the corresponding category. Then, a set of documents is labeled according to the representative profile. To reduce possible bias originating from the category name and the representative profile, document clustering is used to refine the quality of the initial labeling. The training data are subsequently constructed to train the discriminative classifier. Empirical experiments show that one variant of our FACT approach significantly outperforms the state-of-the-art unsupervised TC approach. It achieves more than 90% of the F1 performance of the baseline SVM methods, which demonstrates the effectiveness of the proposed approaches.
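The initial labeling step described above — score each unlabeled document against a keyword profile derived from each category name — can be sketched as follows. This is a deliberately simplified illustration (set overlap instead of the paper's lexical-database expansion), and `label_by_profile` is a hypothetical name:

```python
def label_by_profile(doc_tokens, profiles):
    """Assign a document to the category whose name-derived keyword
    profile it overlaps most. Return None when no profile matches,
    leaving the document to the later clustering-based refinement."""
    best, best_score = None, 0
    for cat, keywords in profiles.items():
        score = len(set(doc_tokens) & keywords)
        if score > best_score:
            best, best_score = cat, score
    return best
```

Documents labeled this way then seed the training set for the discriminative classifier, with clustering used to filter out mislabeled seeds.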

13.
Blog mining addresses the problem of mining information from blog data. Although mining blogs shares many similarities with mining Web and text documents, existing techniques need to be reevaluated and adapted for the multidimensional representation of blog data, which exhibits dimensions not present in traditional documents, such as tags. Blog tags are semantic annotations in blogs that can be valuable sources of additional labels for the myriad of blog documents. In this paper, we present a tag-topic model for blog mining, based on the Author-Topic model and Latent Dirichlet Allocation. The tag-topic model determines the most likely tags and words for a given topic in a collection of blog posts. The model has been successfully implemented and evaluated on real-world blog data.

14.
Huang Dingqi, Shi Shenghui. Application Research of Computers (《计算机应用研究》), 2020, 37(6): 1724-1728, 1754
Written Chinese has no natural word delimiters: there are no segmentation marks between words, so recognizing words in Chinese text requires exploiting the habits and rules of written usage, i.e., lexical features. Previous work typically annotates such lexical features explicitly in experiments to improve recognition, which adds manual processing steps and greatly increases the effort of porting an algorithm. This paper surveys and summarizes the lexical features of common written Chinese and exploits the feature-extraction capability of conditional random fields (CRF) by implementing complex feature functions that extract lexical features implicitly from corpora carrying only simple annotation, improving recognition. Experiments show that applying complex lexical features to Chinese word segmentation effectively improves recognition performance and suggests a new way to make recognition algorithms more portable in applications.
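A CRF segmenter works from feature functions evaluated at each character position. The sketch below shows only a minimal window-based feature template, not the paper's complex feature functions; `features` and the feature-name strings are hypothetical:

```python
def features(chars, i):
    """Minimal CRF feature template for character position i:
    the current character, its neighbors, and the preceding bigram.
    Richer templates of this shape are how lexical conventions get
    encoded implicitly, without explicit lexical annotation."""
    feats = {"cur=" + chars[i]}
    if i > 0:
        feats.add("prev=" + chars[i - 1])
        feats.add("bi=" + chars[i - 1] + chars[i])
    if i + 1 < len(chars):
        feats.add("next=" + chars[i + 1])
    return feats
```

A CRF toolkit then learns a weight per (feature, segmentation-tag) pair, so the templates, not hand annotation, carry the lexical knowledge.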

15.
Structure analysis of table form documents is an important issue because a printed document and even an electronic document do not provide logical structural information but merely geometrical layout and lexical information. To handle these documents automatically, logical structure information is necessary. In this paper, we first analyze the elements of the form documents from a communication point of view and retrieve the grammatical elements that appear in them. Then, we present a document structure grammar which governs the logical structure of the form documents. Finally, we propose a structure analysis system of the table form documents based on the grammar. By using grammar notation, we can easily modify and keep it consistent, as the rules are relatively simple. Another advantage of using grammar notation is that it can be used for generating documents only from logical structure. In our system, documents are assumed to be composed of a set of boxes and they are classified as seven box types. Then the box relations between the indication box and its associated entry box are analyzed based on the semantic and geometric knowledge defined in the document structure grammar. Experimental results have shown that the system successfully analyzed several kinds of table forms.

16.
Xiao Lin, Chen Boli, Huang Xin, Liu Huafeng, Jing Liping, Yu Jian. Journal of Software (《软件学报》), 2020, 31(4): 1079-1089
Since the rise of big data, multi-label classification has been an important and widely studied problem with many practical applications, such as text classification, image recognition, video annotation, and multimedia information retrieval. Traditional multi-label text classification algorithms treat labels as symbols without semantic information; in many cases, however, a text's labels carry specific semantics that correspond to the content of the document. To establish and exploit this correspondence, a LAbel Semantic Attention Multi-label classification (LASA) method is proposed, which relies on the document text and its labels and shares word representations between documents and labels. For document embedding, a bi-directional long short-term memory (Bi-LSTM) network produces a hidden representation of each word, and a label semantic attention mechanism assigns each word in the document a weight reflecting its importance for the current label. In addition, since labels are often correlated in the semantic space, label semantics are also used to model label correlations. Experimental results on standard multi-label text classification datasets show that the proposed method effectively captures the important words and outperforms state-of-the-art multi-label text classification algorithms.
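The label-attention idea above reduces, at its core, to scoring each word's hidden representation against a label embedding and normalizing with a softmax. A minimal sketch under that assumption (dot-product scoring; the paper's exact scoring function may differ, and `label_attention` is a hypothetical name):

```python
import math

def label_attention(word_vecs, label_vec):
    """Softmax attention weights of each word with respect to one
    label embedding: words semantically close to the label (higher
    dot product) receive larger weights."""
    scores = [sum(a * b for a, b in zip(w, label_vec)) for w in word_vecs]
    m = max(scores)                      # subtract max for stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]
```

The label-specific document representation is then the weighted sum of the word vectors, computed once per label.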

17.
Learning can benefit from the modern Web structure through the convergence of top‐down encyclopedic institutional knowledge and bottom‐up user‐generated annotations. A promising approach to such convergence consists in leveraging the social functionalities in 3.0 executable environments through the recommendation of tags with the mediation of lexical and semantic resources. This paper addresses such issues through the design and evaluation of a tag recommendation system in a Web 3.0 Web portal, ‘150 Digit’. Designed for schools, 150 Digit encourages students and teachers to interact with a set of four exhibitions on the historical and social aspects of the Italian unification process in a virtual environment. The website displays the exhibits and their related documents promoting the users' active participation through tagging, voting and commenting on the exhibits. Tags become a way for students to create and explore new relations among the site contents, orthogonal to the institutional viewpoint. In this paper, we illustrate the recommendation strategy incorporated in 150 Digit, which relies on a semantic middleware to mediate between the input expressed by the users through tags and the top‐down institutional classification provided by the curators of the exhibitions. Following this, we describe the evaluation process conducted in a real experimental setting and discuss the evaluation results and their implications for learning environments.

18.
Integrating XML and databases (cited by 1)

19.
20.
Automatic image tagging assigns images semantic keywords, called tags, which significantly facilitates image search and organization. Most present image tagging approaches are constrained by the model learned from the training dataset, and they do not exploit other types of web resources (e.g., web text documents). In this paper, we propose a search-based image tagging algorithm (CTSTag), in which the result tags are derived from web search results. Specifically, it assigns the query image a more comprehensive tag set derived from both web images and web text documents. First, content-based image search retrieves a set of visually similar images, ranked by semantic consistency values, and a set of relevant tags is derived from these top-ranked images as the initial tag set. Second, a text-based search retrieves other relevant web resources using the initial tag set as the query; after a denoising process, the initial tag set is expanded with tags mined from the text-based search results. A probability-flow measure is then proposed to estimate the probabilities of the expanded tags. Finally, all tags are refined using the Random Walk with Restart (RWR) method and the top ones are assigned to the query image. Experiments on the NUS-WIDE dataset show both the performance of the proposed algorithm and the advantage of image retrieval and organization based on the resulting tags.
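The Random Walk with Restart used in the final refinement step iterates p = (1-α)·P·p + α·r over a tag-similarity graph, so tags well connected to the initial (restart) tags accumulate probability. A minimal sketch, assuming a precomputed column-stochastic transition matrix P (`rwr` is a hypothetical name, not the paper's implementation):

```python
def rwr(P, restart, alpha=0.15, iters=100):
    """Random Walk with Restart: repeatedly mix one step of the walk
    over transition matrix P (P[i][j] = prob. of moving j -> i) with
    a jump back to the restart distribution, weight alpha."""
    n = len(restart)
    p = restart[:]
    for _ in range(iters):
        p = [(1 - alpha) * sum(P[i][j] * p[j] for j in range(n))
             + alpha * restart[i] for i in range(n)]
    return p
```

The stationary scores rank the candidate tags; the top-ranked ones become the final tag set for the query image.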


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司)  京ICP备09084417号