  Fee-based full text   1681 articles
  Free   102 articles
  Free (domestic)   132 articles
Electrical engineering   18 articles
General   51 articles
Chemical industry   7 articles
Metalworking   6 articles
Machinery & instrumentation   16 articles
Building science   25 articles
Mining engineering   5 articles
Energy & power   3 articles
Light industry   6 articles
Hydraulic engineering   3 articles
Weapons industry   1 article
Radio & electronics   129 articles
General industrial technology   41 articles
Metallurgy   17 articles
Automation technology   1587 articles
  2024   4 articles
  2023   11 articles
  2022   42 articles
  2021   50 articles
  2020   52 articles
  2019   32 articles
  2018   39 articles
  2017   49 articles
  2016   58 articles
  2015   60 articles
  2014   106 articles
  2013   83 articles
  2012   112 articles
  2011   106 articles
  2010   70 articles
  2009   105 articles
  2008   133 articles
  2007   117 articles
  2006   112 articles
  2005   83 articles
  2004   74 articles
  2003   66 articles
  2002   46 articles
  2001   39 articles
  2000   35 articles
  1999   35 articles
  1998   16 articles
  1997   16 articles
  1996   12 articles
  1995   17 articles
  1994   21 articles
  1993   11 articles
  1992   7 articles
  1991   3 articles
  1990   9 articles
  1989   8 articles
  1988   2 articles
  1987   5 articles
  1986   5 articles
  1985   9 articles
  1984   4 articles
  1983   2 articles
  1982   10 articles
  1981   9 articles
  1980   9 articles
  1979   4 articles
  1978   4 articles
  1977   7 articles
  1975   3 articles
  1972   1 article
1915 results found (search time: 15 ms)
1.
Topic modeling is a popular analytical tool for evaluating data. Numerous topic modeling methods have been developed that account for many kinds of relationships and restrictions within datasets; however, these methods are not frequently employed. Instead, many researchers gravitate to Latent Dirichlet Allocation (LDA), which, although flexible and adaptive, is not always suited to modeling more complex data relationships. We present topic modeling approaches that can handle correlation between topics and changes in topics over time, as well as short texts such as those encountered in social media and other sparse text data. We also briefly review the algorithms used to optimize and infer parameters in topic modeling, which is essential to producing meaningful results regardless of the method chosen. We believe this review will encourage more diversity in topic modeling practice and help readers determine which topic modeling method best suits their needs.
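For orientation, the LDA baseline the abstract contrasts against can be sketched in a few lines of scikit-learn; the toy corpus, component count, and vectorizer settings below are illustrative assumptions, not the review's setup.

```python
# Minimal LDA sketch (assumed setup, not the paper's): fit topics on a toy corpus
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "topic models uncover latent themes in text collections",
    "social media posts are short and sparse",
    "dynamic topic models track how themes change over time",
]

# LDA operates on raw term counts, not tf-idf weights
vectorizer = CountVectorizer(stop_words="english")
X = vectorizer.fit_transform(docs)

lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(X)  # rows: per-document topic mixtures

# Print the top words of each learned topic
terms = vectorizer.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top = weights.argsort()[-5:][::-1]
    print(f"topic {k}:", ", ".join(terms[i] for i in top))
```

Standard LDA assumes topics are independent; the correlated, dynamic, and short-text variants surveyed above relax exactly these assumptions.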
2.
This paper presents an innovative solution for modeling distributed adaptive systems in biomedical environments. We present an original TCBR-HMM (Text Case-Based Reasoning-Hidden Markov Model) for biomedical text classification based on document content. The main goal is to propose a classifier more effective than current methods in this environment, where the model must be adapted to new documents in an iterative learning framework. To demonstrate its effectiveness, we include a set of experiments performed on the OHSUMED corpus. Our classifier is compared with Naive Bayes and SVM techniques, which are commonly used in text classification tasks. The results suggest that the TCBR-HMM model is indeed more suitable for document classification: it is empirically and statistically comparable to the SVM classifier and outperforms it in time efficiency.
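The TCBR-HMM itself is not specified in the abstract, but the two baselines it is compared against are standard; a minimal sketch of those baselines in scikit-learn follows, with the toy training data and pipeline settings as assumptions.

```python
# Sketch of the Naive Bayes and SVM baselines mentioned in the abstract
# (illustrative pipeline; the paper's exact preprocessing is not given)
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.svm import LinearSVC

train_docs = ["acute myocardial infarction case report", "gene expression in yeast"]
train_labels = ["cardiology", "genetics"]

for clf in (MultinomialNB(), LinearSVC()):
    model = make_pipeline(TfidfVectorizer(), clf)
    model.fit(train_docs, train_labels)
    print(type(clf).__name__, model.predict(["coronary artery disease"]))
```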
3.
Most current semantic parsing methods are based on compositional semantics, and the core of such methods is the lexicon. The lexicon is a collection of lexical entries that define the mapping from words in natural-language sentences to predicates in the knowledge-base ontology. Semantic parsing has long faced the problem of insufficient lexicon coverage. To address this problem, this paper builds on existing work and proposes a bridging-based lexicon learning method that can automatically introduce and learn new lexical entries during training. To further improve the accuracy of the newly learned entries, we design a new word-to-binary-predicate feature template and use a voting-based method to obtain a core lexicon. Comparative experiments on two public datasets (WebQuestions and Free917) show that the proposed method learns new lexical entries and improves lexicon coverage, thereby improving the performance of the semantic parsing system, especially its recall.
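As a rough illustration of the voting idea described above, a core lexicon can be kept by counting how often each word-to-predicate mapping is proposed across training passes. The data structures, predicate names, and threshold below are hypothetical, not the paper's.

```python
# Hypothetical sketch of voting-based core-lexicon selection
from collections import Counter

# Each training pass proposes (word, predicate) lexicon entries
proposed = [
    [("born", "people.person.place_of_birth"), ("wife", "people.person.spouse")],
    [("born", "people.person.place_of_birth"), ("wife", "people.marriage.spouse")],
    [("born", "people.person.place_of_birth")],
]

votes = Counter(entry for run in proposed for entry in run)

MIN_VOTES = 2  # assumed threshold: keep entries proposed by at least two passes
core_lexicon = {word: pred for (word, pred), n in votes.items() if n >= MIN_VOTES}
print(core_lexicon)  # {'born': 'people.person.place_of_birth'}
```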
4.
We present a method for recovering from syntax errors encountered during parsing. The method provides a form of minimum-distance repair, has linear time complexity, and is completely automatic. A formal method for evaluating the performance of error recovery methods, based on global minimum-distance error correction, is also presented. The minimum-distance error recovery method achieves the theoretically best performance on 80% of the Pascal programs in the weighted Ripley-Druseikis collection. Comparisons with other error recovery methods are given.
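The minimum-distance yardstick against which the method is evaluated reduces to a classic dynamic program over token sequences; a generic sketch of that metric (not the paper's linear-time repair algorithm) follows.

```python
# Generic minimum-distance (Levenshtein) computation over token streams;
# this is the evaluation yardstick described above, not the repair method itself
def min_distance(tokens_a, tokens_b):
    m, n = len(tokens_a), len(tokens_b)
    # dp[i][j]: cheapest edit script turning the first i tokens of a into the first j of b
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i          # i deletions
    for j in range(n + 1):
        dp[0][j] = j          # j insertions
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            sub = 0 if tokens_a[i - 1] == tokens_b[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # delete
                           dp[i][j - 1] + 1,        # insert
                           dp[i - 1][j - 1] + sub)  # substitute / match
    return dp[m][n]

# e.g. repairing a missing semicolon counts as one insertion
print(min_distance(["x", ":=", "1", "end"], ["x", ":=", "1", ";", "end"]))  # 1
```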
5.
This article is based on experience with data and text mining to gain information for strategic business decisions, using host-based analysis and visualisation (A/V), primarily in the field of patents. The potential advantages of host-based A/V are pointed out, and the features of the first such A/V software, STN®AnaVist™, are described in detail. Areas covered include the user interfaces, the initial set of documents for A/V, data mining, text mining, reporting, and suggestions for further development.
6.
Centroid-based categorization is one of the most popular algorithms in text classification. In this approach, normalization is an important factor in improving the performance of a centroid-based classifier when documents in the text collection have quite different sizes and/or the numbers of documents per class are unbalanced. In the past, most researchers applied document normalization, e.g., document-length normalization, while some considered a simple kind of class normalization, so-called class-length normalization, to address the imbalance problem. However, no intensive work has clarified how these normalizations affect classification performance or whether other useful normalizations exist. The purpose of this paper is threefold: (1) to investigate the effectiveness of document- and class-length normalizations on several data sets, (2) to evaluate a number of commonly used normalization functions, and (3) to introduce a new type of class normalization, called term-length normalization, which exploits the term distribution among documents in a class. The experimental results show that a classifier with the weight-merge-normalize approach (class-length normalization) performs better than one with the weight-normalize-merge approach (document-length normalization) on data sets with unbalanced numbers of documents per class, and is quite competitive on balanced ones. Among normalization functions, normalization based on term weighting performs best on average. Term-length normalization is useful for improving classification accuracy, and the combination of term- and class-length normalizations outperforms pure class-length normalization, pure term-length normalization, and no normalization by margins of 4.29%, 11.50%, and 30.09%, respectively.
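The two centroid-construction orders the abstract compares can be made concrete with a small NumPy sketch; the tf-idf weighting, cosine scoring, and toy matrices are assumed details, not the paper's configuration.

```python
# Sketch of the two centroid-construction orders compared in the abstract
import numpy as np

def l2_normalize(m, axis):
    norms = np.linalg.norm(m, axis=axis, keepdims=True)
    return m / np.maximum(norms, 1e-12)

def centroid(class_docs, order):
    """class_docs: (n_docs, n_terms) term-weight matrix for one class."""
    if order == "weight-normalize-merge":      # document-length normalization
        return l2_normalize(class_docs, axis=1).sum(axis=0)
    if order == "weight-merge-normalize":      # class-length normalization
        return l2_normalize(class_docs.sum(axis=0, keepdims=True), axis=1)[0]
    raise ValueError(order)

def classify(doc, centroids):
    doc = doc / max(np.linalg.norm(doc), 1e-12)
    # highest cosine similarity to a class centroid wins
    scores = {c: (v / max(np.linalg.norm(v), 1e-12)) @ doc for c, v in centroids.items()}
    return max(scores, key=scores.get)

docs_by_class = {
    "sports": np.array([[2.0, 0.0, 1.0], [3.0, 0.0, 0.0]]),
    "finance": np.array([[0.0, 2.0, 1.0]]),
}
centroids = {c: centroid(m, "weight-merge-normalize") for c, m in docs_by_class.items()}
print(classify(np.array([1.0, 0.0, 0.0]), centroids))  # -> 'sports'
```

Note how class-length normalization rescales each class centroid to unit length after merging, so a class with many (or long) documents cannot dominate purely by mass; this is the property the abstract credits for its advantage on unbalanced data.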
7.
Relative weight calculation for text index terms and its application    Cited: 4 (self-citations: 0, citations by others: 4)
The method used to compute index-term weights determines the accuracy of text classification. This paper proposes a relative weight calculation method for text index terms: the weight of an index term is computed from the ratio of the term's frequency in the given text to its average frequency across the whole text collection. The method effectively improves the accuracy with which index terms identify text content.
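A direct reading of that weighting scheme in code might look as follows; the toy corpus and function names are assumptions for illustration.

```python
# Sketch of the relative-weight scheme described above: a term's weight in a
# document is its frequency there divided by its average frequency per document
from collections import Counter

corpus = [
    "the cat sat on the mat".split(),
    "the dog chased the cat".split(),
    "the stocks fell on the trade news".split(),
]

counts = [Counter(doc) for doc in corpus]
n_docs = len(corpus)
totals = Counter()
for c in counts:
    totals.update(c)
avg = {t: total / n_docs for t, total in totals.items()}  # mean frequency per document

def relative_weight(term, doc_index):
    return counts[doc_index][term] / avg[term]

print(relative_weight("cat", 0))  # 1.5: "cat" is relatively frequent in doc 0
print(relative_weight("the", 0))  # 1.0: common everywhere, so its weight shrinks
```

Unlike a raw term frequency, this ratio discounts terms that are frequent across the whole collection, which is how it sharpens content identification.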
8.
Inger Lytje, AI & Society, 1996, 10(2): 142-163
Participatory design strategies have placed the user at the center of the design process, leaving computer software designers without guidelines for how to design and implement the software system. This paper aims to bring the designer back to the center of the design process, and the way to do so is to consider computer software as text. Three text theories are presented to explain what is meant by text: pragmatics, structuralism, and deconstructivism. Finally, it is discussed how design processes should be understood and organized when taking the text point of view of computer software.
9.
Full-text systems that access text randomly cannot normally determine the format operations in effect at a given target location. The problem can be solved by viewing the format marks as the non-terminals in a format grammar. A formatted text can then be parsed using the grammar to build a data structure that serves both as a parse tree and as a search tree. While processing a retrieved segment, a full-text system can follow the search tree from root to leaf, collecting the format marks encountered at each node to derive the sequence of commands active for that segment. The approach also supports the notion of a 'well-formatted' document and provides a means of verifying the well-formedness of a given text. To illustrate the approach, a sample set of format marks and a sample grammar, suitable for formatting and parsing the article itself as a sample text, are given.
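The root-to-leaf collection step can be illustrated with a small tree sketch; the node layout and mark names below are hypothetical, not the article's sample grammar.

```python
# Hypothetical sketch: walking a format parse tree from root to leaf,
# collecting the format marks in effect for a retrieved text position
from dataclasses import dataclass, field

@dataclass
class Node:
    mark: str                      # format mark opened at this node, e.g. "\\chapter"
    span: tuple                    # (start, end) character offsets covered by the node
    children: list = field(default_factory=list)

def active_marks(node, pos):
    """Return the format commands in effect at character offset `pos`."""
    marks = [node.mark]
    for child in node.children:
        lo, hi = child.span
        if lo <= pos < hi:
            marks.extend(active_marks(child, pos))
            break
    return marks

doc = Node("\\document", (0, 100), [
    Node("\\chapter", (0, 60), [Node("\\italic", (10, 20))]),
    Node("\\chapter", (60, 100)),
])
print(active_marks(doc, 15))  # ['\\document', '\\chapter', '\\italic']
```

Because each node records the span it covers, the same structure answers both search queries (which node contains offset 15?) and formatting queries (which commands are active there?), which is the dual parse-tree/search-tree role described above.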
10.
Evidence from 3 experiments reveals interference effects from structural relationships that are inconsistent with any grammatical parse of the perceived input. Processing disruption was observed when items occurring between a head and a dependent overlapped with either (or both) syntactic or semantic features of the dependent. Effects of syntactic interference occur in the earliest online measures, in the region where the retrieval of a long-distance dependent occurs; semantic interference effects occur in later online measures at the end of the sentence. Both effects endure in offline comprehension measures, suggesting that interfering items participate in incorrect interpretations that resist reanalysis. The data are discussed in terms of a cue-based retrieval account of parsing, which accommodates the fact that the parser must violate the grammar for these interference effects to occur. More broadly, this research indicates the need for a precise specification of the interface between the parsing mechanism and the memory system that supports language comprehension. (PsycINFO Database Record (c) 2010 APA, all rights reserved)