  Paid full text   1718 articles
  Free   84 articles
  Free (domestic)   97 articles
Electrical Engineering   34 articles
General   75 articles
Chemical Industry   32 articles
Metalworking   22 articles
Machinery & Instruments   35 articles
Building Science   117 articles
Mining Engineering   18 articles
Energy & Power   12 articles
Light Industry   22 articles
Hydraulic Engineering   29 articles
Petroleum & Natural Gas   32 articles
Weapons Industry   5 articles
Radio & Electronics   93 articles
General Industrial Technology   69 articles
Metallurgy   33 articles
Nuclear Technology   7 articles
Automation Technology   1264 articles
  2023   9 articles
  2022   22 articles
  2021   50 articles
  2020   49 articles
  2019   25 articles
  2018   36 articles
  2017   38 articles
  2016   60 articles
  2015   47 articles
  2014   99 articles
  2013   77 articles
  2012   110 articles
  2011   126 articles
  2010   87 articles
  2009   101 articles
  2008   121 articles
  2007   102 articles
  2006   114 articles
  2005   91 articles
  2004   95 articles
  2003   89 articles
  2002   55 articles
  2001   41 articles
  2000   48 articles
  1999   42 articles
  1998   21 articles
  1997   18 articles
  1996   17 articles
  1995   16 articles
  1994   12 articles
  1993   12 articles
  1992   11 articles
  1991   4 articles
  1990   3 articles
  1989   3 articles
  1987   5 articles
  1986   4 articles
  1985   1 article
  1984   4 articles
  1983   1 article
  1982   7 articles
  1981   6 articles
  1980   8 articles
  1979   1 article
  1978   2 articles
  1977   3 articles
  1974   1 article
  1973   1 article
  1972   1 article
  1966   1 article
A total of 1899 results were found (search time: 269 ms).
1.
Topic modeling is a popular analytical tool for evaluating data. Numerous topic modeling methods have been developed that account for many kinds of relationships and restrictions within datasets; however, these methods are not frequently employed. Instead, many researchers gravitate to Latent Dirichlet Allocation (LDA), which, although flexible and adaptive, is not always suited to modeling more complex data relationships. We present topic modeling approaches that can handle correlation between topics, changes in topics over time, and short texts such as those encountered in social media or other sparse text data. We also briefly review the algorithms used to optimize and infer parameters in topic modeling, which is essential to producing meaningful results regardless of the method chosen. We believe this review will encourage more diversity when performing topic modeling and help readers determine which topic modeling method best suits their needs.
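For readers unfamiliar with the baseline the review contrasts against, the following is a minimal, hedged sketch of fitting a plain LDA model with scikit-learn; the library, toy corpus, and parameter choices are illustrative assumptions, not the setup used in the review.

    # Minimal LDA sketch (assumed scikit-learn implementation and toy corpus;
    # the review itself does not prescribe a specific library or data set).
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.decomposition import LatentDirichletAllocation

    docs = [
        "topic models uncover latent themes in document collections",
        "social media posts are short and sparse texts",
        "dynamic topic models track how themes change over time",
        "correlated topic models relax independence assumptions",
    ]

    # Bag-of-words term counts are the usual input to LDA.
    vectorizer = CountVectorizer(stop_words="english")
    X = vectorizer.fit_transform(docs)

    # Fit a small LDA model; choosing the number of topics and the priors is
    # where the parameter-inference algorithms surveyed in the review matter.
    lda = LatentDirichletAllocation(n_components=2, random_state=0)
    doc_topic = lda.fit_transform(X)

    # Print the top terms per topic.
    terms = vectorizer.get_feature_names_out()
    for k, weights in enumerate(lda.components_):
        top = weights.argsort()[::-1][:5]
        print(f"topic {k}:", [terms[i] for i in top])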
2.
This paper presents an innovative solution for modeling distributed adaptive systems in biomedical environments. We present an original TCBR-HMM (Text Case Based Reasoning-Hidden Markov Model) for biomedical text classification based on document content. The main goal is to propose a classifier that is more effective than current methods in this environment, where the model needs to be adapted to new documents in an iterative learning framework. To demonstrate its effectiveness, we include a set of experiments performed on the OHSUMED corpus. Our classifier is compared with the Naive Bayes and SVM techniques commonly used in text classification tasks. The results suggest that the TCBR-HMM model is indeed more suitable for document classification. The model is empirically and statistically comparable to the SVM classifier and outperforms it in terms of time efficiency.
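The TCBR-HMM model itself is not specified in this abstract, so it is not reproduced here; the sketch below only illustrates the kind of Naive Bayes and SVM baseline comparison described, with scikit-learn, toy documents, and toy labels as assumptions.

    # Hedged sketch of the Naive Bayes vs. SVM baseline comparison mentioned
    # above; library, toy documents, and labels are assumptions.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.svm import LinearSVC
    from sklearn.pipeline import make_pipeline

    train_docs = [
        "chest pain and shortness of breath",
        "fractured femur after a fall",
        "elevated blood glucose and excessive thirst",
        "persistent cough with fever",
    ]
    train_labels = ["cardiology", "orthopedics", "endocrinology", "pulmonology"]

    for clf in (MultinomialNB(), LinearSVC()):
        # The same tf-idf features feed either classifier through one pipeline.
        model = make_pipeline(TfidfVectorizer(), clf)
        model.fit(train_docs, train_labels)
        print(type(clf).__name__, model.predict(["cough and mild fever"]))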
3.
姚强  戴鑫 《石油沥青》2006,20(3):58-62
This paper reviews the evolution and current status of China's building asphalt standards, and analyzes domestic building asphalt production, product quality, and the shortcomings of the current national standard GB/T 494-1998 (Building Petroleum Asphalt). It proposes the building asphalt grades that should be produced in the future and recommends revising GB/T 494-1998 with reference to ASTM D312 (Standard Specification for Asphalt Used in Roofing) and the Chinese petrochemical industry standard SH/T 0002-1990(1998) (Waterproofing and Damp-proofing Petroleum Asphalt), so as to add building asphalt grades and improve their quality and service performance, meeting the construction industry's demand for asphalt in more grades, of higher quality, and with better performance.
4.
Revising deductive knowledge and stereotypical knowledge in a student model   (Total citations: 1; self-citations: 1; citations by others: 0)
A user/student model must be revised when new information about the user/student is obtained. But a sophisticated user/student model is a complex structure that contains different types of knowledge. Different techniques may be needed for revising different types of knowledge. This paper presents a student model maintenance system (SMMS) which deals with revision of two important types of knowledge in student models: deductive knowledge and stereotypical knowledge. In the SMMS, deductive knowledge is represented by justified beliefs. Its revision is accomplished by a combination of techniques involving reason maintenance and formal diagnosis. Stereotypical knowledge is represented in the Default Package Network (DPN). The DPN is a knowledge partitioning hierarchy in which each node contains concepts in a sub-domain. Revision of stereotypical knowledge is realized by propagating new information through the DPN to change default packages (stereotypes) of the nodes in the DPN. A revision of deductive knowledge may trigger a revision of stereotypical knowledge, which results in a desirable student model in which the two types of knowledge exist harmoniously.
5.
A revision algorithm is a learning algorithm that identifies the target concept, starting from an initial concept. Such an algorithm is considered efficient if its complexity (in terms of the resource one is interested in) is polynomial in the syntactic distance between the initial and the target concept, but only polylogarithmic in the number of variables in the universe. We give an efficient revision algorithm in the model of learning with equivalence and membership queries for threshold functions, and some negative results showing, for instance, that threshold functions cannot be revised efficiently from either type of query alone. The algorithms work in a general revision model where both deletion- and addition-type revision operators are allowed.
6.
The Design of Discrimination Experiments   (Total citations: 1; self-citations: 0; citations by others: 1)
Experimentation plays a fundamental role in scientific discovery. Scientists experiment to gather data, investigate phenomena, measure quantities, and test theories. In this article, we address the problem of designing experiments to discriminate between two competing theories. Given an initial situation for which the two theories make the same prediction, the experiment design problem is to determine how to modify the situation such that the two theories make different predictions for the modified situation. The modified situation is called a discrimination experiment. We present a knowledge-intensive method called DEED for designing discrimination experiments. The method analyzes the differences in the two theories' explanations of the prediction for the initial situation. Based on this analysis, it determines modifications to the initial situation that will result in a discrimination experiment. We illustrate the method with the design of experiments to discriminate between several pairs of qualitative theories in the fluids domain.
7.
This article is based on experiences with data and text mining to gain information for strategic business decisions, using host-based analysis and visualisation (A/V), primarily in the field of patents. The potential advantages of host-based A/V are pointed out and the features of the first such A/V software, STN®AnaVist™, are described in detail. Areas covered include the user interfaces, initial set of documents for A/V, data mining, text mining, reporting, and suggestions for further development.
8.
Centroid-based categorization is one of the most popular algorithms in text classification. In this approach, normalization is an important factor for improving the performance of a centroid-based classifier when documents in a text collection have quite different sizes and/or the numbers of documents in classes are unbalanced. In the past, most researchers applied document normalization, e.g., document-length normalization, while some considered a simple kind of class normalization, so-called class-length normalization, to address the class imbalance problem. However, there has been no intensive work clarifying how these normalizations affect classification performance and whether there are any other useful normalizations. The purpose of this paper is threefold: (1) to investigate the effectiveness of document- and class-length normalizations on several data sets, (2) to evaluate a number of commonly used normalization functions, and (3) to introduce a new type of class normalization, called term-length normalization, which exploits the term distribution among documents in the class. The experimental results show that a classifier with the weight-merge-normalize approach (class-length normalization) performs better than one with the weight-normalize-merge approach (document-length normalization) on data sets with unbalanced numbers of documents in classes, and is quite competitive on those with balanced numbers of documents. Among normalization functions, normalization based on term weighting performs better than the others on average. Term-length normalization is useful for improving classification accuracy, and the combination of term- and class-length normalizations outperforms pure class-length normalization, pure term-length normalization, and no normalization by margins of 4.29%, 11.50%, and 30.09%, respectively.
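As a concrete illustration of the two centroid constructions named above, here is a minimal sketch contrasting the weight-normalize-merge ordering (document-length normalization) with the weight-merge-normalize ordering (class-length normalization); the NumPy code and toy weight vectors are assumptions, and the paper's exact weighting and normalization functions are not reproduced.

    # Toy contrast of the two orderings: normalize each document then merge
    # (document-length normalization) vs. merge raw weights then normalize the
    # centroid (class-length normalization). Data and the L2 norm are assumptions.
    import numpy as np

    def l2_normalize(v):
        n = np.linalg.norm(v)
        return v / n if n > 0 else v

    # Rows are documents of one class; columns are term weights.
    class_docs = np.array([
        [3.0, 0.0, 1.0],
        [10.0, 2.0, 0.0],   # a much longer document dominates raw sums
        [2.0, 1.0, 1.0],
    ])

    # weight-normalize-merge: document-length normalization, then merge.
    centroid_doc_norm = np.mean([l2_normalize(d) for d in class_docs], axis=0)

    # weight-merge-normalize: merge raw weights, then class-length normalization.
    centroid_class_norm = l2_normalize(np.mean(class_docs, axis=0))

    print("document-length normalized centroid:", centroid_doc_norm)
    print("class-length normalized centroid:   ", centroid_class_norm)

In either case, classifying a new document then amounts to assigning it to the class whose centroid has the highest cosine similarity with the document's weight vector.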
9.
The national standard for petroleum asphalt felt with paper base (《石油沥青纸胎油毡》) has undergone its third revision. This paper presents the reasons for the revision, its content and basis, and compares the standard with similar domestic and foreign product standards.
10.
A Relative Weight Computation Method for Text Index Terms and Its Application   (Total citations: 4; self-citations: 0; citations by others: 4)
The method used to compute the weights of text index terms determines the accuracy of text classification. This paper proposes a relative weighting method for text index terms, in which the weight of an index term is computed as the ratio between the term's frequency in the given document and its average frequency over the whole document space. The method effectively improves the accuracy with which index terms identify the content of a document.
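A minimal sketch of the relative-weight idea described above, assuming a simple ratio of a term's in-document frequency to its average frequency over the whole document space; the toy corpus and epsilon smoothing are added assumptions, not details given in the abstract.

    # weight(t, d) = tf(t, d) / avg_tf(t), where avg_tf(t) is the term's mean
    # frequency over all documents in the collection.
    from collections import Counter

    docs = [
        "asphalt standard revision asphalt quality".split(),
        "text classification index term weight".split(),
        "term weight relative frequency text".split(),
    ]

    counts = [Counter(d) for d in docs]
    vocab = {w for d in docs for w in d}

    # Average frequency of each term over the whole document space.
    avg_tf = {t: sum(c[t] for c in counts) / len(docs) for t in vocab}

    def relative_weight(term, doc_index, eps=1e-9):
        return counts[doc_index][term] / (avg_tf[term] + eps)

    for t in ("asphalt", "term", "weight"):
        print(t, [round(relative_weight(t, i), 2) for i in range(len(docs))])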