1.
To effectively analyze and mine user opinions in online product reviews, and thereby provide valuable information to both consumers and merchants, this paper proposes steps and methods for mining online product reviews. Building on the analysis of user reviews, it further analyzes the attention paid to product feature words and their sentiment polarity, achieving more comprehensive review mining. Finally, the proposed method is tested on reviews of the iPhone 4S, verifying its feasibility.
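A minimal sketch of the kind of feature-attention and polarity aggregation this abstract describes. The feature list, sentiment lexicon, and context window are invented for illustration; none of them come from the paper.

```python
# Toy attention/polarity aggregation over product reviews.
# FEATURES, POS_WORDS, NEG_WORDS, and the +/-2 word window are assumptions.
from collections import defaultdict

POS_WORDS = {"great", "sharp", "fast"}
NEG_WORDS = {"slow", "dim", "fragile"}
FEATURES = {"screen", "battery", "camera"}

def mine_reviews(reviews):
    """Return per-feature mention counts (attention) and net polarity."""
    attention = defaultdict(int)
    polarity = defaultdict(int)
    for review in reviews:
        tokens = review.lower().split()
        for i, tok in enumerate(tokens):
            if tok in FEATURES:
                attention[tok] += 1
                # crude window-based polarity: inspect neighboring words
                window = tokens[max(0, i - 2): i + 3]
                polarity[tok] += sum(w in POS_WORDS for w in window)
                polarity[tok] -= sum(w in NEG_WORDS for w in window)
    return dict(attention), dict(polarity)

reviews = ["the screen is great but the battery is slow to charge",
           "great camera and a sharp screen"]
att, pol = mine_reviews(reviews)
print(att, pol)
```

A real system would use dependency parsing or learned lexicons rather than a fixed window, but the attention-count/polarity-sum split mirrors the two analyses the abstract mentions.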
2.
This paper deals with verb-verb morphological disambiguation of two different verbs that share the same inflected form. Verb-verb morphological ambiguity (VVMA) is one of the critical issues in Korean part-of-speech (POS) tagging. Recognizing the verb base form of an ambiguous word depends heavily on the lexical information in its surrounding context and on the domain it occurs in. However, current probabilistic morpheme-based POS tagging systems cannot handle VVMA adequately: most are limited in how much word-level context they can reflect, and they are trained on too little labeled data to capture the lexical information that VVMA disambiguation requires.
In this study, we suggest a classifier based on a large pool of raw text that contains sufficient lexical information to handle VVMA. The underlying idea is to automatically generate an annotated training set for ambiguity problems such as VVMA resolution from unlabeled, unambiguous instances belonging to the same class. This makes it possible to label ambiguous instances with knowledge induced from unambiguous ones; since an unambiguous instance has only one possible label, its annotated corpus can be generated automatically from unlabeled data.
In our setting, not all conjugations of irregular verbs lead to the spelling changes that cause VVMA, so training data for VVMA disambiguation are generated from unambiguous conjugations of each verb base form that an ambiguous word could take. This approach requires neither a manual annotation step for an initial training set nor a procedure for selecting good seeds to iteratively grow a labeled set, both of which are important issues in bootstrapping methods that use unlabeled data; this is a strength relative to previous work in that line. Furthermore, it provides plenty of confident seeds that are unambiguous and give enough coverage for the learning process. We also suggest a strategy for incrementally extending context information with web counts, applied only to test examples that are difficult to predict with the current classifier or that differ substantially from the pre-training data.
As a result, automatic data generation and knowledge acquisition from unlabeled text for VVMA resolution improved overall token-level tagging accuracy by 0.04%. In practice, 9-10% of verb-related tagging errors are fixed by VVMA resolution, whose accuracy was about 98% using a Naïve Bayes classifier coupled with selective web counts.
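A hedged sketch of the core trick: train a Naïve Bayes context classifier only on unambiguous instances, then apply it to an ambiguous surface form. The toy English contexts and the labels `BASE1`/`BASE2` merely stand in for two Korean verb base forms that share one ambiguous inflection; the smoothing and feature set are simplistic assumptions.

```python
# Naive Bayes over context words, trained on automatically labeled
# (because unambiguous) instances. BASE1/BASE2 are hypothetical base forms.
from collections import Counter
import math

def train_nb(examples):
    """examples: (context_words, base_form) pairs from unambiguous conjugations."""
    word_counts, label_counts, vocab = {}, Counter(), set()
    for words, label in examples:
        label_counts[label] += 1
        wc = word_counts.setdefault(label, Counter())
        for w in words:
            wc[w] += 1
            vocab.add(w)
    return word_counts, label_counts, vocab

def classify(context, model):
    word_counts, label_counts, vocab = model
    total = sum(label_counts.values())
    best, best_lp = None, float("-inf")
    for label, n in label_counts.items():
        lp = math.log(n / total)
        denom = sum(word_counts[label].values()) + len(vocab)
        for w in context:
            lp += math.log((word_counts[label][w] + 1) / denom)  # add-one smoothing
        if lp > best_lp:
            best, best_lp = label, lp
    return best

# Unambiguous surface forms supply their labels "for free".
train = [(["he", "cut", "wood"], "BASE1"),
         (["she", "cuts", "paper"], "BASE1"),
         (["sharp", "blade", "tool"], "BASE2"),
         (["rusty", "blade"], "BASE2")]
model = train_nb(train)
print(classify(["cut", "wood"], model))
```

The paper additionally backs off to selective web counts for hard cases; that step is omitted here.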
3.
By analyzing the theoretical basis for applying small self-built corpora to English public-speaking instruction, and drawing on actual classroom use, this paper highlights the advantages of small self-built corpora in improving students' speech-writing ability, and explores a new approach to teaching English public speaking.
4.
Technical-term translation is one of the most difficult tasks for human translators, since (1) most translators are not familiar with terms and domain-specific terminology and (2) such terms are not adequately covered by printed dictionaries. This paper describes an algorithm for translating technical words and terms from noisy parallel corpora across language groups. Given any word that is part of a technical term in the source language, the algorithm produces a ranked list of candidate matches for it in the target language. Potential translations for the whole term are compiled from the matched words and are also ranked. We show how this ranked list helps translators with technical-term translation. Most algorithms for lexical and term translation focus on Indo-European language pairs, and most use a sentence-aligned, clean parallel corpus without insertion, deletion, or OCR noise. Our algorithm is language- and character-set-independent and is robust to noise in the corpus. It requires minimal preprocessing and obtains technical-word translations without sentence-boundary identification or sentence alignment, both from the English–Japanese awk manual corpus, with noise arising from text insertions and deletions, and from the English–Chinese HKUST bilingual corpus. We obtain a precision of 55.35% on the awk corpus for word translation including rare words, counting only the best candidate and direct translations. Best-candidate translation precision is 89.93% on the HKUST corpus. Potential term translations produced by the program help bilingual speakers achieve a 47% improvement in translating technical terms.
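The following is a hedged sketch of one standard way to rank translation candidates without sentence alignment, using a Dice-style co-occurrence score over rough bilingual text chunks. The chunking granularity, the scoring formula, and the tiny English–French example are all assumptions, not the paper's actual algorithm.

```python
# Rank target-word candidates for a source word by Dice co-occurrence
# over coarsely segmented (not sentence-aligned) bilingual chunks.
from collections import Counter

def rank_candidates(source_word, chunk_pairs):
    """chunk_pairs: list of (source_tokens, target_tokens) rough segments."""
    src_freq = 0
    tgt_freq = Counter()
    co_freq = Counter()
    for src, tgt in chunk_pairs:
        present = source_word in src
        if present:
            src_freq += 1
        for t in set(tgt):
            tgt_freq[t] += 1
            if present:
                co_freq[t] += 1
    # Dice coefficient: 2 * |co-occurrences| / (|src occurrences| + |tgt occurrences|)
    scores = {t: 2 * co_freq[t] / (src_freq + tgt_freq[t]) for t in tgt_freq}
    return sorted(scores, key=scores.get, reverse=True)

pairs = [(["print", "line"], ["imprimer", "ligne"]),
         (["print", "file"], ["imprimer", "fichier"]),
         (["open", "file"], ["ouvrir", "fichier"])]
ranked = rank_candidates("print", pairs)
print(ranked[0])
```

Because only chunk-level co-occurrence is counted, the method tolerates insertions, deletions, and OCR noise inside chunks, which is the robustness property the abstract emphasizes.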
5.
Sentence alignment using P-NNT and GMM
Parallel corpora have become an essential resource for work in multilingual natural language processing. However, sentence-aligned parallel corpora are more useful than non-aligned ones for cross-language information retrieval and machine translation applications. In this paper, we present two new approaches to aligning English–Arabic sentences in bilingual parallel corpora, based on probabilistic neural network (P-NNT) and Gaussian mixture model (GMM) classifiers. A feature vector is extracted from the text pair under consideration; it contains features such as length, punctuation score, and cognate score. One set of manually prepared data was used to train the probabilistic neural network and Gaussian mixture model, and another set was used for testing. Using the P-NNT and GMM approaches, we achieved error reductions of 27% and 50%, respectively, over the length-based approach when applied to a set of parallel English–Arabic documents. In addition, the P-NNT and GMM results outperform those of the combined model that exploits length, punctuation, and cognates in a dynamic framework, and the GMM approach also outperforms Melamed's and Moore's approaches. Moreover, these new approaches are valid for any language pair and are quite flexible, since the feature vector may contain more, fewer, or different features than the ones used here, such as a lexical matching feature or Hanzi characters for Japanese–Chinese texts.
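A hedged sketch of the feature-extraction step only; the P-NNT/GMM training itself is omitted. The exact scoring formulas below (exact-match cognates, min/max punctuation ratio, character-length ratio) are illustrative stand-ins, not the paper's definitions.

```python
# Build a (length, punctuation, cognate) feature vector for a sentence pair,
# as input for a downstream alignment classifier.
import string

def cognate_score(src_tokens, tgt_tokens):
    """Fraction of source tokens with a near-identical token in the target."""
    shared = sum(any(s.lower() == t.lower() for t in tgt_tokens)
                 for s in src_tokens)
    return shared / max(len(src_tokens), 1)

def punctuation_score(src, tgt):
    sp = [c for c in src if c in string.punctuation]
    tp = [c for c in tgt if c in string.punctuation]
    return min(len(sp), len(tp)) / max(len(sp), len(tp), 1)

def feature_vector(src, tgt):
    length_ratio = len(src) / max(len(tgt), 1)
    return (length_ratio,
            punctuation_score(src, tgt),
            cognate_score(src.split(), tgt.split()))

vec = feature_vector("NATO met in Geneva, Switzerland.",
                     "NATO se reunit a Geneva, Suisse.")
print(vec)
```

The abstract's point is that the classifier is agnostic to the vector's contents, so features like this can be added or swapped per language pair.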
8.
This paper describes a framework for modeling the machine transliteration problem. The parameters of the proposed model are automatically acquired through statistical learning from a bilingual proper name list. Unlike previous approaches, the model does not involve the use of either a pronunciation dictionary for converting source words into phonetic symbols or manually assigned phonetic similarity scores between source and target words. We also report how the model is applied to extract proper names and corresponding transliterations from parallel corpora. Experimental results show that the average rates of word and character precision are 93.8% and 97.8%, respectively.
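A hedged sketch, not the paper's model: estimating character-correspondence probabilities directly from a bilingual proper-name list, with names assumed to be pre-segmented into corresponding units. Real transliteration models must also learn the many-to-many segmentation, which is skipped here.

```python
# Learn P(target_unit | source_unit) by relative frequency from a
# (hypothetical, pre-segmented) bilingual name list.
from collections import Counter

def learn_mappings(name_pairs):
    """name_pairs: (source_units, target_units) lists of equal length."""
    counts = Counter()
    source_totals = Counter()
    for src_units, tgt_units in name_pairs:
        for s, t in zip(src_units, tgt_units):
            counts[(s, t)] += 1
            source_totals[s] += 1
    return {(s, t): c / source_totals[s] for (s, t), c in counts.items()}

pairs = [(["ma", "ry"], ["玛", "丽"]),
         (["ma", "ck"], ["马", "克"]),
         (["ma", "ry"], ["玛", "丽"])]
probs = learn_mappings(pairs)
print(probs[("ma", "玛")])
```

The key property the abstract claims, learning purely from the name list with no pronunciation dictionary or hand-assigned similarity scores, holds for this toy estimator as well.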
9.
Example-based machine translation systems depend on bilingual sentence pairs. To acquire such pairs in bulk, document-aligned bilingual texts must be taken as input and their sentences aligned automatically. By analyzing the characteristics of Chinese–English legal texts, this paper proposes an alignment hypothesis for legal texts: structural markers and sentences are first identified in the source statute and its translation, and the texts are then aligned at the sentence level. On a corpus of 150 Chinese–English legal texts, the method achieved an alignment accuracy of 80.98%.
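A hedged sketch of the structural-marker idea: anchor the alignment on shared article numbers before pairing sentences. The regular expressions for Chinese and English article headings, and the tiny numeral table (1-10 only), are assumptions for illustration.

```python
# Align a Chinese statute with its English translation at the article level,
# using article-number headings as structural anchors.
import re

ZH_ARTICLE = re.compile(r"第([一二三四五六七八九十]+)条")
EN_ARTICLE = re.compile(r"Article\s+(\d+)", re.IGNORECASE)
ZH_NUM = {"一": 1, "二": 2, "三": 3, "四": 4, "五": 5,
          "六": 6, "七": 7, "八": 8, "九": 9, "十": 10}

def split_articles(text, pattern, to_int):
    """Split text into (article_number, body) chunks at each heading."""
    parts = []
    matches = list(pattern.finditer(text))
    for i, m in enumerate(matches):
        end = matches[i + 1].start() if i + 1 < len(matches) else len(text)
        parts.append((to_int(m.group(1)), text[m.end():end].strip()))
    return parts

def align(zh_text, en_text):
    zh = dict(split_articles(zh_text, ZH_ARTICLE, lambda s: ZH_NUM.get(s, 0)))
    en = dict(split_articles(en_text, EN_ARTICLE, int))
    return {k: (zh[k], en[k]) for k in zh if k in en}

zh = "第一条 本法适用于合同。第二条 合同应当公平。"
en = "Article 1 This law applies to contracts. Article 2 Contracts shall be fair."
pairs = align(zh, en)
print(len(pairs))
```

Within each matched article, sentence-level pairing can then proceed with far fewer candidates, which is what makes the structural anchoring pay off.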
10.
English–Chinese word alignment without a bilingual dictionary
This paper proposes a corpus-based English–Chinese word alignment model that requires no bilingual dictionary. It formally represents natural-language sentences as sets and aligns words through set intersection and difference operations, while also accounting for word order and repeated words. The model aligns not only high-frequency but also low-frequency words, and is robust to out-of-vocabulary words and Chinese word-segmentation errors. Since it needs almost no linguistic knowledge or resources, the corpus-based method can be applied independently. Experiments show that the larger the homogeneous corpus, the higher the precision and recall of word alignment.
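A hedged sketch of the set-based intuition: comparing two bilingual sentence pairs, English words shared by both sentences should translate to Chinese words shared by both, and words unique to one pair should align with words unique to the same pair. The full model also handles word order and repeated words, which this toy omits; the Chinese sides are assumed pre-segmented.

```python
# Align words across two bilingual sentence pairs using set intersection
# and difference, following the abstract's set formalization (simplified).
def set_align(pair_a, pair_b):
    e1, c1 = set(pair_a[0].split()), set(pair_a[1])
    e2, c2 = set(pair_b[0].split()), set(pair_b[1])
    links = []
    # (shared, shared), (unique-to-a, unique-to-a), (unique-to-b, unique-to-b)
    for eg, cg in [(e1 & e2, c1 & c2), (e1 - e2, c1 - c2), (e2 - e1, c2 - c1)]:
        if len(eg) == 1 and len(cg) == 1:
            links.append((eg.pop(), cg.pop()))  # unambiguous one-to-one link
    return links

pair_a = ("I like tea", ["我", "喜欢", "茶"])
pair_b = ("I like coffee", ["我", "喜欢", "咖啡"])
print(set_align(pair_a, pair_b))
```

Here the singleton differences yield the links tea-茶 and coffee-咖啡; over a large corpus, many such comparisons accumulate evidence even for low-frequency words, which matches the abstract's claim.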