Similar Literature
20 similar documents retrieved (search time: 78 ms)
1.
Design and Application of a Chinese Word Segmentation System for an Intelligent MIS Interface   Cited by: 4 (self-citations: 0, other citations: 4)
Providing a Chinese-language retrieval interface is a major trend in MIS applications; the main difficulty lies in enabling the computer to understand Chinese query expressions. To this end, this paper builds a Chinese word segmentation system for an intelligent MIS retrieval interface and proposes a segmentation strategy. The ambiguity problem in Chinese segmentation is examined in depth, and a disambiguation algorithm is designed using mutual information and the t-score difference. Experiments show that the system achieves high segmentation accuracy and efficiency.
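As a sketch of the mutual-information signal this abstract mentions (the t-score-difference component is omitted, and all counts below are illustrative, not from the paper), pointwise mutual information of an adjacent character pair can be estimated from corpus counts; a high value suggests the pair coheres as a word at an ambiguous segmentation boundary:

```python
import math

def pmi(count_xy, count_x, count_y, total):
    """Pointwise mutual information of adjacent characters x and y.

    A high PMI suggests x and y cohere as one word, which a segmenter
    can use to prefer one reading over another at an ambiguous boundary.
    """
    p_xy = count_xy / total
    p_x = count_x / total
    p_y = count_y / total
    return math.log2(p_xy / (p_x * p_y))

# Hypothetical corpus counts: the pair co-occurs far more often than
# chance predicts, so the segmenter would keep it as a single word.
score = pmi(count_xy=800, count_x=1000, count_y=1200, total=1_000_000)
```

In practice the threshold separating "word" from "non-word" pairs would be tuned on held-out segmented text.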

2.
Approaches to reduce the effects of OOV queries on indexed spoken audio   Cited by: 1 (self-citations: 0, other citations: 1)
We present several novel approaches to the Out of Vocabulary (OOV) query problem for spoken audio: indexing based on syllable-like units called particles and query expansion according to acoustic confusability for a word index. We also examine linear and OOV-based combination of indexing schemes. We experiment on 75 h of broadcast news, comparing our techniques to a word index, a phoneme index and a phoneme index queried with phoneme sequences. Our results show that our approaches are superior to both a word index and a phoneme index for OOV words, and have comparable performance to the sequence of phonemes scheme. The particle system has worse performance than the acoustic query expansion scheme. The best system uses word queries for in-vocabulary words and a linear combination of the phoneme sequence scheme and acoustic query expansion for OOV words. Using the best possible weights for linear combination, this system improves the average precision from 0.35 for a word index to 0.40, a result only obtainable if the weights could be learnt on a development query set. The next best system used a word index for in-vocabulary words and the phoneme sequence system otherwise and had average precision of 0.39.
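A minimal sketch of the routing and linear combination this abstract describes, assuming hypothetical function names and score values (the paper learns the combination weight on a development query set; the default here is illustrative):

```python
def score_document(query_word, vocabulary, word_score,
                   phone_score, expansion_score, weight=0.5):
    """Route the query as in the best system described above:
    use the word index for in-vocabulary terms, and a linear
    combination of the phoneme-sequence score and the acoustic
    query-expansion score for OOV terms.
    """
    if query_word in vocabulary:
        return word_score
    return weight * phone_score + (1.0 - weight) * expansion_score

# In-vocabulary query: the word-index score is used directly.
iv = score_document("news", {"news"}, 1.0, 0.2, 0.4)
# OOV query: the two OOV schemes are interpolated.
oov = score_document("zyrtec", {"news"}, 0.0, 0.2, 0.4, weight=0.5)
```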

3.
This paper proposes a fuzzy classification system to perform word indexing in ancient printed documents. The indexing system receives a given word selected by a user. The word is preprocessed using an aspect-ratio filter, ensuring that only interesting word candidates are considered. The image is classified by oriented feature extraction using Gabor filter banks. The oriented features are used to generate membership functions that characterize the selected word. This target word image is then compared to the potential matches, using a similarity matrix. The indexing system is flexible and lightweight when compared to other optimal recognizers, which allows its use in "real-time" applications. A significant test revealed that the indexer achieved very good results in terms of precision and recall on texts from the 17th century.

4.
This paper proposes a novel text representation and matching scheme for Chinese text retrieval. At present, the indexing methods of Chinese retrieval systems are either character-based or word-based. The character-based indexing methods, such as bi-gram or tri-gram indexing, have high false drops due to mismatches between queries and documents. On the other hand, it is difficult to efficiently identify all the proper nouns, domain terminology, and phrases in word-based indexing systems. The new indexing method uses both proximity and mutual information of word pairs to represent the text content, so as to overcome the high-false-drop, new-word, and phrase problems that exist in character-based and word-based systems. The evaluation results indicate that the average query precision of proximity-based indexing is 5.2% higher than the best results of TREC-5.

5.
Dictionary lookup is a fundamental component of Chinese information processing systems and has a major impact on system efficiency. Research on segmentation dictionary mechanisms in China dates back to the mid-to-late 1980s. To improve the lookup efficiency of existing dictionary-based segmentation, this paper proposes a completely new dictionary mechanism for words of no more than four characters: a chained hash keyed on the positional (base) value of the character string, i.e., a word-value hash. Each character's internal code is re-encoded; using the positional-number principle, a value is computed for each word and a chained word-value hash table is built, thereby increasing matching speed.
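A rough sketch of the word-value hash idea described above, substituting `ord()` for the paper's re-encoded internal character codes and using an illustrative base and bucket count:

```python
class WordValueHash:
    """Chained hash keyed on a word's "value": each character's code is
    treated as a digit in a base-B positional number, so words of up to
    four characters map to distinct integers.

    Sketch only: the paper re-encodes GB internal codes; here we use
    ord() directly, and BASE/BUCKETS are illustrative choices.
    """
    BASE = 1 << 16      # large enough to hold one BMP code point per "digit"
    BUCKETS = 4093      # illustrative prime bucket count

    def __init__(self):
        self.table = [[] for _ in range(self.BUCKETS)]

    def _value(self, word):
        v = 0
        for ch in word:              # positional (base-B) accumulation
            v = v * self.BASE + ord(ch)
        return v

    def add(self, word):
        v = self._value(word)
        self.table[v % self.BUCKETS].append(v)

    def contains(self, word):
        v = self._value(word)
        return v in self.table[v % self.BUCKETS]
```

Because the word value is unique per character string, a lookup costs one value computation plus a scan of a short chain, rather than a binary search over the whole word list.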

6.
An Automatic Subject Indexing System for Medical Literature Based on Feedback Rule Learning   Cited by: 1 (self-citations: 0, other citations: 1)
For the automatic indexing of traditional Chinese medicine (TCM) literature, this paper proposes and implements an automatic subject indexing system based on rule learning. The system extracts and recognizes subject patterns from document titles, quite effectively solving the main-heading/subheading combination problem in the automatic indexing of medical literature, and it avoids the Chinese word segmentation obstacle present in frequency-based automatic indexing. An initial version of the system was tested on a large body of TCM literature with very good results, demonstrating a degree of practical applicability.

7.
Computer-Aided Indexing Based on Chinese Titles   Cited by: 1 (self-citations: 0, other citations: 1)
This paper describes the architecture of a computer-aided indexing system based on Chinese document titles and discusses some of its key technical issues. From the perspective of system design, it covers the system's table-building, catalog, segmentation-and-indexing, proofreading, number-selection/printing, and system-management modules, with particular emphasis on the word segmentation and indexing technique.

8.
This paper incorporates partial semantic information into the bigram model and proposes an improved bigram indexing strategy. It is evaluated on public TREC data sets using the BM25 formula of the 2-Poisson model. Experiments show that, compared with character-based, word-based, and plain bigram indexing strategies, the improved bigram indexing strategy performs relatively better on the main evaluation metrics, mean average precision and R-precision.
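The BM25 weighting formula of the 2-Poisson model that this abstract applies can be sketched as follows (the defaults for k1 and b are the common choices in the literature, not values taken from the paper):

```python
import math

def bm25(tf, df, doc_len, avg_len, n_docs, k1=1.2, b=0.75):
    """Okapi BM25 weight for one index term (here, a character bigram)
    in one document.

    tf: term frequency in the document; df: number of documents
    containing the term; doc_len/avg_len: document length and average
    document length; n_docs: collection size.
    """
    idf = math.log((n_docs - df + 0.5) / (df + 0.5) + 1.0)
    return idf * tf * (k1 + 1) / (tf + k1 * (1 - b + b * doc_len / avg_len))
```

A document's score for a query is then the sum of this weight over the query's bigrams; frequent-in-document, rare-in-collection terms score highest.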

9.
Based on an in-depth analysis of Korean information retrieval systems, this paper studies indexing methods for improving Korean retrieval performance. By analyzing the strengths and weaknesses of classic indexing methods, such as noun-unit, morpheme-unit, n-gram-unit, and sentence-unit indexing, experiments identify the key factors affecting indexing performance. After a detailed discussion of 30 Korean stopwords, indexing schemes, and the characteristics of the Korean language, a new Korean information retrieval indexing method is proposed that integrates the features of each of these methods. Simulation experiments show that the proposed method performs better.

10.
We propose an approach for the word-level indexing of modern printed documents which are difficult to recognize using current OCR engines. By means of word-level indexing, it is possible to retrieve the position of words in a document, enabling queries involving proximity of terms. Web search engines implement this kind of indexing, allowing users to retrieve Web pages on the basis of their textual content. Nowadays, digital libraries hold collections of digitized documents that can be retrieved either by browsing the document images or relying on appropriate metadata assembled by domain experts. Word indexing tools would therefore increase access to these collections. The proposed system is designed to index homogeneous document collections by automatically adapting to different languages and font styles without relying on OCR engines for character recognition. The approach is based on three main ideas: the use of self-organizing maps (SOM) to perform unsupervised character clustering, the definition of a suitable vector-based word representation whose size depends on the word aspect-ratio, and the run-time alignment of the query word with indexed words to deal with broken and touching characters. The most appropriate applications are for processing modern printed documents (17th to 19th centuries) where current OCR engines are less accurate. Our experimental analysis addresses six data sets containing documents ranging from books of the 17th century to contemporary journals.

11.
The probabilistic latent semantic retrieval model uses statistical methods to build probability distributions over "document - latent semantics - word" relations and exploits these relations for retrieval. This paper compares the effect of different Chinese indexing techniques on retrieval quality within the probabilistic latent semantic retrieval model, examining three indexing techniques: word segmentation, bigrams, and keyword extraction, with a comparative analysis against the vector space model. Experimental results show that, in the probabilistic latent semantic retrieval model, correct word segmentation improves mean average precision.

12.
Design and Implementation of a Character-Table-Based Word Segmentation System for Chinese Search Engines   Cited by: 9 (self-citations: 0, other citations: 9)
丁承, 邵志清. 《计算机工程》 (Computer Engineering), 2001, 27(2): 191-192, F003
This paper analyzes the shortcomings of common dictionary-based Chinese word segmentation methods when applied to Chinese search engine development, proposes a character-table-based word segmentation system for Chinese search engines, and presents its design and implementation with respect to indexing, querying, and disambiguation.

13.
This paper proposes a fast extraction method for Uyghur semantic strings based on statistics and shallow linguistic analysis. A multi-level dynamic index structure is used to build a word index over large-scale text; combined with inter-word association rules of Uyghur, an improved incremental n-gram algorithm expands word strings and discovers credible frequent patterns in the text; finally, the structural completeness of each frequent pattern string is checked in turn to obtain the semantic strings. Experiments on corpora of different sizes show that the method is feasible and effective and can be applied in several areas of Uyghur text mining.

14.
After studying the architecture of the open-source Lucene system and a special XML data source, and to address shortcomings of Lucene's retrieval scoring formula, this paper proposes a formula that combines term position with a second-pass retrieval and designs a text search system around it. Experiments target retrieval performance, similarity-search precision, index space efficiency, and query time efficiency, and the system is deployed on a Tomcat server. Experiments verify that the improved system outperforms the original Lucene system in indexing efficiency, query efficiency, and precision.

15.
16.
Targeting the characteristics of microblog text, this paper proposes a method for automatically identifying index terms in microblog posts. A graph's adjacency matrix is constructed from the semantic similarity between the nouns and verbs in the text; on this adjacency matrix, the PageRank algorithm is used to compute each word's importance, and the words with the highest importance are selected as index terms. Experimental results show that, compared with traditional automatic indexing methods, the proposed method is simple, practical, and more accurate.
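A minimal power-iteration sketch of the PageRank step described above, run over a word-similarity adjacency matrix (the matrix values, damping factor, and iteration count here are illustrative, not taken from the paper):

```python
def pagerank(adj, damping=0.85, iters=50):
    """Power iteration of PageRank over a word-similarity adjacency
    matrix: rows/columns are candidate nouns and verbs, adj[i][j] is
    their similarity. Words with the highest final scores would be
    selected as index terms. Plain-list sketch, no external libraries.
    """
    n = len(adj)
    ranks = [1.0 / n] * n
    out_weight = [sum(row) or 1.0 for row in adj]   # normalize outgoing mass
    for _ in range(iters):
        new = []
        for j in range(n):
            inflow = sum(adj[i][j] / out_weight[i] * ranks[i]
                         for i in range(n))
            new.append((1 - damping) / n + damping * inflow)
        ranks = new
    return ranks

# Toy similarity graph: word 0 is similar to both others, so it should
# come out as the most important candidate index term.
adj = [[0.0, 1.0, 1.0],
       [1.0, 0.0, 0.0],
       [1.0, 0.0, 0.0]]
scores = pagerank(adj)
```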

17.

The internet changed the way that people communicate, and this has led to a vast amount of text being available in electronic format. It includes things like e-mail, technical and scientific reports, tweets, physician notes and military field reports. Providing key-phrases for these extensive text collections thus allows users to grasp the essence of lengthy contents quickly and helps to locate information with high efficiency. When designing a keyword extraction and indexing system, it is essential to pick distinctive properties, called features. In this article, we propose different unsupervised keyword extraction approaches that are independent of the structure, size and domain of the documents. The proposed method relies on a novel and cognitively inspired set of standard, phrase, word-embedding and external-knowledge-source features. The individual and selected feature results are reported through experimentation on four different datasets: SemEval, KDD, Inspec, and DUC. The selected (feature selection) and word-embedding-based features are the best feature sets for keyword extraction and indexing across all of these datasets. That is, the proposed distributed word vector with additional knowledge improves the results significantly over the use of individual features, combined features after feature selection, and the state of the art. After achieving the objective of developing various keyphrase extraction methods, we also applied them to a document classification task.


18.
The set of references that typically appear toward the end of journal articles is sometimes, though not always, a field in bibliographic (citation) databases. But even if references do not constitute such a field, they can be useful as a preprocessing step in the automated extraction of other bibliographic data from articles, as well as in computer-assisted indexing of articles. Automation in data extraction and indexing to minimize human labor is key to the affordable creation and maintenance of large bibliographic databases. Extracting the components of references, such as author names, article title, journal name, publication date and other entities, is therefore a valuable and sometimes necessary task. This paper describes a two-step process using statistical machine learning algorithms, to first locate the references in HTML medical articles and then to parse them. Reference locating identifies the reference section in an article and then decomposes it into individual references. We formulate this step as a two-class classification problem based on text and geometric features. An evaluation conducted on 500 articles drawn from 100 medical journals achieves near-perfect precision and recall rates for locating references. Reference parsing identifies the components of each reference. For this second step, we implement and compare two algorithms. One relies on sequence statistics and trains a Conditional Random Field. The other focuses on local feature statistics and trains a Support Vector Machine to classify each individual word, followed by a search algorithm that systematically corrects low confidence labels if the label sequence violates a set of predefined rules. The overall performance of these two reference-parsing algorithms is about the same: above 99% accuracy at the word level, and over 97% accuracy at the chunk level.

19.
This publication shows how the gap between the HTML-based internet and the RDF-based vision of the semantic web might be bridged, by linking words in texts to concepts of ontologies. Most current search engines use indexes that are built at the syntactical level and return hits based on simple string comparisons. However, the indexes do not contain synonyms, cannot differentiate between homonyms ('mouse' as a pointing device vs. 'mouse' as an animal), and users receive different search results when they use different conjugation forms of the same word. In this publication, we present a system that uses ontologies and Natural Language Processing techniques to index texts, and thus supports word sense disambiguation and the retrieval of texts that contain equivalent words, by indexing them to concepts of ontologies.

For this purpose, we developed fully automated methods for mapping equivalent concepts of imported RDF ontologies (for this prototype WordNet, SUMO and OpenCyc). These methods will thus allow the seamless integration of domain specific ontologies for concept based information retrieval in different domains.

To demonstrate the practical workability of this approach, a set of web pages that contain synonyms and homonyms were indexed and can be queried via a search-engine-like query frontend. However, the ontology-based indexing approach can also be used for other data mining applications such as text clustering, relation mining, and searching free-text fields in biological databases. The ontology alignment methods and some of the text mining principles described in this publication are now incorporated into the ONDEX system http://ondex.sourceforge.net/.


20.
An Experimental Study of Dictionary Mechanisms for Automatic Chinese Word Segmentation   Cited by: 70 (self-citations: 4, other citations: 66)
The segmentation dictionary is a basic component of an automatic Chinese word segmentation system, and its lookup speed directly affects the processing speed of the segmentation system. This paper designs and experimentally evaluates three typical segmentation dictionary mechanisms: whole-word binary search, the TRIE index tree, and character-wise binary search, with emphasis on comparing their time and space efficiency. Experiments show that the character-wise binary search mechanism is simple and efficient, and satisfies the needs of practical automatic Chinese word segmentation systems well.
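Of the three dictionary mechanisms compared above, the TRIE index tree lends itself to a short sketch (a dict-per-node sketch for illustration, not the paper's implementation, which searches sorted arrays character by character):

```python
class TrieNode:
    """One node of the TRIE index tree: children keyed by character."""
    __slots__ = ("children", "is_word")

    def __init__(self):
        self.children = {}
        self.is_word = False

class Trie:
    """TRIE index-tree dictionary mechanism: lookup walks one character
    per level, so matching a candidate word costs O(word length),
    independent of dictionary size.
    """
    def __init__(self, words=()):
        self.root = TrieNode()
        for w in words:
            self.add(w)

    def add(self, word):
        node = self.root
        for ch in word:
            node = node.children.setdefault(ch, TrieNode())
        node.is_word = True

    def contains(self, word):
        node = self.root
        for ch in word:
            node = node.children.get(ch)
            if node is None:
                return False
        return node.is_word
```

The same walk also yields all dictionary words that are prefixes of a candidate string, which is what a maximum-match segmenter needs at each position.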


Copyright©北京勤云科技发展有限公司  京ICP备09084417号