2,245 query results in total; search took 15 ms.
1.
Topic modeling is a popular analytical tool for evaluating data. Numerous topic modeling methods have been developed that account for many kinds of relationships and restrictions within datasets; however, these methods are not frequently employed. Instead, many researchers gravitate toward Latent Dirichlet Allocation (LDA), which, although flexible and adaptive, is not always suited to modeling more complex data relationships. We present topic modeling approaches capable of modeling correlation between topics and changes in topics over time, as well as handling short texts such as those encountered in social media or other sparse text data. We also briefly review the algorithms used to optimize and infer parameters in topic modeling, which is essential to producing meaningful results regardless of method. We believe this review will encourage more diversity in topic modeling practice and help determine which topic modeling method best suits a user's needs.
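The parameter-inference step this abstract mentions can be illustrated with a collapsed Gibbs sampler for vanilla LDA. This is a toy-scale, pure-Python sketch, not any surveyed paper's implementation; the hyperparameters and corpus are illustrative assumptions.

```python
import random
from collections import defaultdict

def lda_gibbs(docs, K, iters=50, alpha=0.1, beta=0.01, seed=0):
    """Collapsed Gibbs sampling for vanilla LDA (toy-scale sketch)."""
    rng = random.Random(seed)
    V = len({w for d in docs for w in d})          # vocabulary size
    # z[d][i]: topic assigned to token i of doc d; initialize randomly
    z = [[rng.randrange(K) for _ in d] for d in docs]
    ndk = [[0] * K for _ in docs]                  # doc-topic counts
    nkw = [defaultdict(int) for _ in range(K)]     # topic-word counts
    nk = [0] * K                                   # tokens per topic
    for di, d in enumerate(docs):
        for wi, w in enumerate(d):
            k = z[di][wi]
            ndk[di][k] += 1; nkw[k][w] += 1; nk[k] += 1
    for _ in range(iters):
        for di, d in enumerate(docs):
            for wi, w in enumerate(d):
                # remove the token's current assignment from the counts
                k = z[di][wi]
                ndk[di][k] -= 1; nkw[k][w] -= 1; nk[k] -= 1
                # full conditional p(z = j | z_-i, w), up to a constant
                p = [(ndk[di][j] + alpha) * (nkw[j][w] + beta) / (nk[j] + V * beta)
                     for j in range(K)]
                # draw a new topic proportionally to p
                r = rng.random() * sum(p)
                knew, acc = 0, p[0]
                while acc < r:
                    knew += 1
                    acc += p[knew]
                z[di][wi] = knew
                ndk[di][knew] += 1; nkw[knew][w] += 1; nk[knew] += 1
    return ndk, nkw
```

Real workloads would use an optimized library; the point here is only the per-token resample against the full conditional.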
2.
This paper presents an innovative solution for modeling distributed adaptive systems in biomedical environments. We present an original TCBR-HMM (Text Case Based Reasoning-Hidden Markov Model) for biomedical text classification based on document content. The main goal is to propose a classifier more effective than current methods in this environment, where the model must be adapted to new documents in an iterative learning framework. To demonstrate its performance, we include a set of experiments performed on the OHSUMED corpus. Our classifier is compared with Naive Bayes and SVM techniques, commonly used in text classification tasks. The results suggest that the TCBR-HMM model is indeed more suitable for document classification. The model is empirically and statistically comparable to the SVM classifier and outperforms it in terms of time efficiency.
3.
This article is based on experience with data and text mining to gain information for strategic business decisions, using host-based analysis and visualisation (A/V), primarily in the field of patents. The potential advantages of host-based A/V are pointed out, and the features of the first such A/V software, STN®AnaVist™, are described in detail. Areas covered include the user interfaces, the initial set of documents for A/V, data mining, text mining, reporting, and suggestions for further development.
4.
Centroid-based categorization is one of the most popular algorithms in text classification. In this approach, normalization is an important factor in improving the performance of a centroid-based classifier when documents in the text collection have quite different sizes and/or the numbers of documents per class are unbalanced. In the past, most researchers applied document normalization, e.g., document-length normalization, while some considered a simple kind of class normalization, so-called class-length normalization, to address the unbalancedness problem. However, no intensive work has clarified how these normalizations affect classification performance, or whether other useful normalizations exist. The purpose of this paper is threefold: (1) to investigate the effectiveness of document- and class-length normalizations on several data sets, (2) to evaluate a number of commonly used normalization functions, and (3) to introduce a new type of class normalization, called term-length normalization, which exploits the term distribution among documents in the class. The experimental results show that a classifier with the weight-merge-normalize approach (class-length normalization) performs better than one with the weight-normalize-merge approach (document-length normalization) on data sets with unbalanced numbers of documents per class, and is quite competitive on those with balanced numbers of documents. Among normalization functions, normalization based on term weighting performs better than the others on average. Term-length normalization also proves useful for improving classification accuracy. The combination of term- and class-length normalizations outperforms pure class-length normalization, pure term-length normalization, and no normalization, by margins of 4.29%, 11.50%, and 30.09%, respectively.
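A minimal sketch of a centroid-based classifier in the weight-merge-normalize spirit described above: merge the term-frequency vectors of a class, average, and L2-normalize the merged centroid. The toy data and function names are illustrative assumptions, not the paper's implementation.

```python
from collections import Counter
from math import sqrt

def centroid(class_docs):
    """Weight-merge-normalize: merge term-frequency vectors of a class,
    average them, then L2-normalize the merged centroid."""
    merged = Counter()
    for tf in class_docs:
        merged.update(tf)
    avg = {t: w / len(class_docs) for t, w in merged.items()}
    norm = sqrt(sum(w * w for w in avg.values())) or 1.0
    return {t: w / norm for t, w in avg.items()}

def cosine(a, b):
    """Cosine similarity between two sparse term-weight vectors (dicts)."""
    dot = sum(w * b.get(t, 0.0) for t, w in a.items())
    na = sqrt(sum(w * w for w in a.values())) or 1.0
    nb = sqrt(sum(w * w for w in b.values())) or 1.0
    return dot / (na * nb)

def classify(doc_tf, centroids):
    """Assign the document to the class with the most similar centroid."""
    return max(centroids, key=lambda c: cosine(doc_tf, centroids[c]))
```

For example, with two tiny classes built from term-frequency dicts, a document dominated by "cat" lands in the "pets" class and one containing only "bond" in "finance".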
5.
A Relative Weighting Method for Text Index Terms and Its Application
The method used to weight text index terms determines the accuracy of text classification. This paper proposes a relative weighting method for text index terms: the weight of an index term is computed from the ratio between its frequency of occurrence in the given document and its average frequency of occurrence over the whole document space. The method effectively improves the accuracy with which index terms identify document content.
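The relative weight described above can be sketched directly: weight(t, doc) = tf(t, doc) / average tf(t, ·) over the corpus. This is a plain reading of the abstract's formula, not the paper's code; the corpus is a made-up example.

```python
from collections import Counter

def relative_weights(doc, corpus):
    """Weight of term t in doc = tf(t, doc) / average tf(t, .) over the corpus.
    Terms that never occur in the corpus are skipped."""
    tf = Counter(doc)
    counts = [Counter(d) for d in corpus]
    return {t: tf[t] / (sum(c[t] for c in counts) / len(corpus))
            for t in tf if any(c[t] for c in counts)}
```

A term that is frequent in the document but rare on average across the corpus receives a weight above 1, which is the discriminative effect the abstract claims.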
6.
Inger Lytje 《AI & Society》1996,10(2):142-163
Participatory design strategies have placed the user at the center of the design process, leaving computer software designers without guidelines for how to design and implement the software system. This paper aims to bring the designer back to the center of the design process, and the way to do so is to consider computer software as text. Three different text theories are presented to explain what is meant by text: pragmatics, structuralism, and deconstructivism. Finally, it discusses how design processes should be understood, and how they should be organized, when taking the text point of view of computer software.
7.
Full-text systems that access text randomly cannot normally determine the format operations in effect for a given target location. The problem can be solved by viewing the format marks as the non-terminals in a format grammar. A formatted text can then be parsed using the grammar to build a data structure that serves both as a parse tree and as a search tree. While processing a retrieved segment, a full-text system can follow the search tree from root to leaf, collecting the format marks encountered at each node to derive the sequence of commands active for that segment. The approach also supports the notion of a 'well formatted' document and provides a means of verifying the well-formedness of a given text. To illustrate the approach, a sample set of format marks and a sample grammar are given, suitable for formatting and parsing the article itself as a sample text.
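The root-to-leaf lookup described above can be sketched with a toy mark syntax. Assuming paired marks written like `<b>...</b>` (my notation, not the paper's), the parser builds a tree whose nodes record offsets into the plain (mark-free) text, and a lookup walks root to leaf collecting the marks active at an offset; a failed close-mark match flags an ill-formatted text.

```python
import re

TOKEN = re.compile(r"<(/?)(\w+)>")

def build_tree(marked_text):
    """Parse paired marks like <b>...</b> into a tree whose nodes record
    (mark, start, end) spans over the plain (mark-free) text.
    Raises ValueError for mismatched marks, i.e. an ill-formatted text."""
    root = {"mark": None, "start": 0, "end": None, "children": []}
    stack = [root]
    plain, last, pos = [], 0, 0
    for m in TOKEN.finditer(marked_text):
        seg = marked_text[last:m.start()]
        plain.append(seg)
        pos += len(seg)
        last = m.end()
        if m.group(1):                       # closing mark: pop and verify
            node = stack.pop()
            if node["mark"] != m.group(2):
                raise ValueError("mismatched format marks")
            node["end"] = pos
        else:                                # opening mark: push a new node
            node = {"mark": m.group(2), "start": pos, "end": None, "children": []}
            stack[-1]["children"].append(node)
            stack.append(node)
    seg = marked_text[last:]
    plain.append(seg)
    root["end"] = pos + len(seg)
    return root, "".join(plain)

def active_marks(node, offset):
    """Follow the tree root-to-leaf, collecting the marks in effect
    at `offset` in the plain text."""
    marks = []
    for child in node["children"]:
        if child["start"] <= offset < child["end"]:
            marks.append(child["mark"])
            marks.extend(active_marks(child, offset))
            break
    return marks
```

For instance, in `a<b>bc<i>d</i></b>e` the character `d` (plain-text offset 3) is governed by the marks `b` then `i`, in root-to-leaf order.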
8.
This paper introduces the functional model and architecture of a Web mining system, with emphasis on the key techniques the system employs, including community discovery, text extraction, and text classification; existing problems are analyzed and summarized.
9.
Several key techniques in image preprocessing and moving-object detection are discussed. Their research status, open problems, and research methods are surveyed; various algorithms are compared and evaluated; and the prospects for moving-object detection research are outlined.
10.
Starting from the current state and characteristics of surface-mount technology (SMT), this paper describes the problems of applying SMT to multi-variety, small-batch electronic products. In view of this situation, the requirements for a surface-mount process simulation system are analyzed, and the design of its three main modules — file preprocessing, simulation, and database maintenance — is discussed in detail, with emphasis on implementing the simulation module with OpenGL; results of the implemented system are then shown. Finally, the features of the simulation system are summarized.
Copyright©北京勤云科技发展有限公司  京ICP备09084417号