Full-text availability: paid 1,736; free 370; free for domestic users 329.
Results by subject: Electrical Engineering 94; General 188; Chemical Industry 99; Metalworking 20; Machinery & Instrumentation 78; Building Science 8; Mining Engineering 63; Energy & Power 18; Light Industry 17; Hydraulic Engineering 3; Petroleum & Natural Gas 5; Weapons Industry 10; Radio & Electronics 262; General Industrial Technology 115; Metallurgy 18; Atomic Energy Technology 1; Automation Technology 1,436.
Results by publication year: 2024: 1; 2023: 35; 2022: 48; 2021: 64; 2020: 80; 2019: 66; 2018: 66; 2017: 90; 2016: 81; 2015: 119; 2014: 140; 2013: 137; 2012: 153; 2011: 181; 2010: 120; 2009: 138; 2008: 154; 2007: 169; 2006: 137; 2005: 116; 2004: 78; 2003: 48; 2002: 50; 2001: 28; 2000: 22; 1999: 29; 1998: 20; 1997: 11; 1996: 10; 1995: 10; 1994: 2; 1993: 3; 1991: 1; 1990: 3; 1989: 6; 1988: 4; 1987: 1; 1984: 4; 1983: 5; 1981: 1; 1980: 3; 1977: 1.
A total of 2,435 matching articles were retrieved (search time: 15 ms).
1.
Clinical narratives such as progress summaries, lab reports, surgical reports, and other narrative texts contain key biomarkers of a patient's health. Evidence-based preventive medicine needs accurate semantic and sentiment analysis to extract and classify medical features as input to appropriate machine learning classifiers. However, the traditional approach of using a single classifier is limited by the need for dimensionality-reduction techniques, statistical feature correlation, and a faster learning rate, and by its lack of consideration of the semantic relations among features. Extracting semantic and sentiment-based features from clinical text and combining multiple classifiers into an ensemble intelligent system overcomes many of these limitations and yields a more robust prediction outcome. The selection of an appropriate approach and its inter-parameter dependency is key to the success of the ensemble method. This paper proposes a hybrid knowledge and ensemble learning framework for predicting venous thromboembolism (VTE) diagnosis, consisting of the following components: a VTE ontology, a semantic extraction and sentiment assessment of risk factors framework, and an ensemble classifier. A component-based analysis approach was adopted for evaluation on a data set of 250 clinical narratives, where the knowledge-and-ensemble framework achieved the following results with and without semantic extraction and sentiment assessment of risk factors, respectively: a precision of 81.8% and 62.9%, a recall of 81.8% and 57.6%, an F-measure of 81.8% and 53.8%, and a receiver operating characteristic of 80.1% and 58.5% in identifying cases of VTE.
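The abstract does not spell out which base learners the ensemble combines or how they are fused. As an illustration only, the sketch below shows one common way to build such an ensemble with soft voting over pre-extracted numeric features; the feature matrix, labels, and choice of base classifiers are assumptions, not taken from the paper.

```python
# Hypothetical sketch: soft-voting ensemble over pre-extracted clinical features.
# The semantic/sentiment processing of narratives is assumed to have already
# produced a numeric matrix X and binary VTE labels y (random placeholders here).
import numpy as np
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score, precision_score, recall_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(250, 20))          # placeholder for 250 encoded narratives
y = rng.integers(0, 2, size=250)        # placeholder VTE / non-VTE labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

ensemble = VotingClassifier(
    estimators=[
        ("lr", LogisticRegression(max_iter=1000)),
        ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
        ("svm", SVC(probability=True, random_state=0)),
    ],
    voting="soft",                      # average predicted class probabilities
)
ensemble.fit(X_tr, y_tr)
pred = ensemble.predict(X_te)
print(precision_score(y_te, pred), recall_score(y_te, pred), f1_score(y_te, pred))
```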
2.
Automated currency validation requires a decision to be made regarding the authenticity of a banknote presented to the validation system. This decision often has to be made with little or no information about the characteristics of possible counterfeits, as is the case for new currency issues. A method for automated currency validation is presented that segments the whole banknote into regions, builds an individual classifier for each region, and then combines a small subset of the region-specific classifiers to provide an overall decision. The segmentation, and the combination of region-specific classifiers that optimizes the false positive and false negative rates, are obtained by employing a genetic algorithm. Experiments on high-value Sterling banknotes were carried out to assess the effectiveness of the proposed solution.
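The abstract only states that a genetic algorithm chooses which region-specific classifiers to combine. The toy sketch below illustrates that idea with plain NumPy; the binary chromosome over regions, majority voting, and a fitness of weighted false-positive plus false-negative rate are all assumed details, not the authors' actual GA.

```python
# Hypothetical sketch of GA-based selection of region-specific classifiers.
# region_preds[r, i] is the 0/1 decision of region classifier r for validation note i;
# y_val holds the true labels (1 = genuine, 0 = counterfeit). All data are simulated.
import numpy as np

rng = np.random.default_rng(1)
n_regions, n_val = 12, 300
y_val = rng.integers(0, 2, size=n_val)
# Simulated region classifiers: mostly correct, with per-region noise.
region_preds = np.array([np.where(rng.random(n_val) < 0.8, y_val, 1 - y_val)
                         for _ in range(n_regions)])

def fitness(mask):
    """Lower is better: weighted FP + FN of the majority vote of selected regions."""
    if mask.sum() == 0:
        return np.inf
    vote = (region_preds[mask.astype(bool)].mean(axis=0) >= 0.5).astype(int)
    fp = np.mean((vote == 1) & (y_val == 0))    # counterfeit accepted
    fn = np.mean((vote == 0) & (y_val == 1))    # genuine rejected
    return 2.0 * fp + fn                        # assumed: false positives cost more

pop = rng.integers(0, 2, size=(30, n_regions))  # population of binary chromosomes
for _ in range(50):                             # generations
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[:10]]      # truncation selection
    children = []
    while len(children) < len(pop):
        a, b = parents[rng.integers(10)], parents[rng.integers(10)]
        cut = rng.integers(1, n_regions)        # one-point crossover
        child = np.concatenate([a[:cut], b[cut:]])
        flip = rng.random(n_regions) < 0.05     # bit-flip mutation
        children.append(np.where(flip, 1 - child, child))
    pop = np.array(children)

best = pop[np.argmin([fitness(ind) for ind in pop])]
print("selected regions:", np.flatnonzero(best), "fitness:", fitness(best))
```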
3.
A novel successive learning algorithm based on a Test Feature Classifier is proposed for efficient handling of sequentially provided training data. The fundamental characteristics of successive learning are considered: after a set of unknown data is recognized by the classifier, those data are fed back into the classifier to obtain improved performance. An efficient algorithm is proposed for the incremental construction of prime tests, which are irreducible combinations of features capable of classifying the training patterns into their correct classes. Four strategies for adding training patterns are investigated with respect to precision and performance on real pattern data. The proposed classifier has been applied to a real-world problem, the classification of defects in wafer images, and achieves excellent performance even with efficient addition strategies.
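The abstract defines prime tests as irreducible feature combinations that separate the training classes but does not show how they are found. The sketch below is a brute-force illustration of that definition on tiny binary toy data (the enumeration-by-size strategy and the data are assumptions, not the paper's incremental algorithm).

```python
# Hypothetical sketch of "tests" in the Test Feature Classifier sense:
# a test is a combination of (binary) features on which no two training
# patterns from different classes coincide; a prime test is an irreducible one.
# Enumerating subsets by increasing size and skipping supersets of tests
# already found keeps only irreducible combinations. Data are toy values.
from itertools import combinations
import numpy as np

X = np.array([[1, 0, 1, 0],
              [1, 1, 0, 0],
              [0, 1, 1, 1],
              [0, 0, 0, 1]])
y = np.array([0, 0, 1, 1])

def is_test(feature_idx):
    """True if no pattern pair from different classes agrees on these features."""
    for i in range(len(X)):
        for j in range(i + 1, len(X)):
            if y[i] != y[j] and np.array_equal(X[i, feature_idx], X[j, feature_idx]):
                return False
    return True

prime_tests = []
for size in range(1, X.shape[1] + 1):
    for combo in combinations(range(X.shape[1]), size):
        # Supersets of already-found prime tests are reducible, so skip them.
        if any(set(p) <= set(combo) for p in prime_tests):
            continue
        if is_test(list(combo)):
            prime_tests.append(combo)

print(prime_tests)   # e.g. [(0,), (3,), (1, 2)] for this toy data
```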
4.
谢小帆, 王菊霞. 《南方金属》, 2004(4): 35-37, 42.
Describes the design and application of the automatic slot divider (自动分槽器) at the No. 5 Rolling Mill of Shaoguan Iron & Steel (韶钢第五轧钢厂).
5.
Centroid-based categorization is one of the most popular algorithms in text classification. In this approach, normalization is an important factor in improving the performance of a centroid-based classifier when documents in the text collection have quite different sizes and/or the numbers of documents per class are unbalanced. In the past, most researchers applied document normalization, e.g., document-length normalization, while some considered a simple kind of class normalization, so-called class-length normalization, to address the imbalance problem. However, there has been no intensive work clarifying how these normalizations affect classification performance or whether other useful normalizations exist. The purpose of this paper is threefold: (1) to investigate the effectiveness of document- and class-length normalization on several data sets, (2) to evaluate a number of commonly used normalization functions, and (3) to introduce a new type of class normalization, called term-length normalization, which exploits the term distribution among the documents in a class. The experimental results show that a classifier with the weight-merge-normalize approach (class-length normalization) performs better than one with the weight-normalize-merge approach (document-length normalization) on data sets with unbalanced numbers of documents per class, and is quite competitive on those with balanced numbers, as sketched below. Among the normalization functions, normalization based on term weighting performs better than the others on average. Term-length normalization is useful for improving classification accuracy, and the combination of term- and class-length normalization outperforms pure class-length normalization, pure term-length normalization, and no normalization by margins of 4.29%, 11.50%, and 30.09%, respectively.
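To make the two centroid-construction orders concrete, the sketch below contrasts weight-normalize-merge (normalize each document vector, then average per class) with weight-merge-normalize (sum the class's raw vectors, then normalize the sum), using cosine similarity for the decision. The toy corpus, labels, and TF-IDF weighting are illustrative assumptions, not the paper's data or exact weighting scheme.

```python
# Hypothetical sketch of the two centroid-construction orders for a
# centroid-based text classifier, on a tiny unbalanced toy corpus.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

docs = ["cheap loan offer", "limited loan offer now", "meeting agenda attached",
        "project meeting notes", "agenda for project review"]
labels = np.array([0, 0, 1, 1, 1])            # class 1 has more documents

X = TfidfVectorizer().fit_transform(docs).toarray()   # the "weight" step

def l2(v):
    n = np.linalg.norm(v)
    return v / n if n > 0 else v

centroids_nm = []   # weight-normalize-merge: document-length normalization
centroids_mn = []   # weight-merge-normalize: class-length normalization
for c in (0, 1):
    class_vecs = X[labels == c]
    centroids_nm.append(np.mean([l2(v) for v in class_vecs], axis=0))
    centroids_mn.append(l2(class_vecs.sum(axis=0)))

def predict(doc_vec, centroids):
    sims = [np.dot(l2(doc_vec), l2(c)) for c in centroids]   # cosine similarity
    return int(np.argmax(sims))

query = X[1]
print(predict(query, centroids_nm), predict(query, centroids_mn))
```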
6.
A SAR image target recognition method based on central-moment features (一种基于中心矩特征的SAR图像目标识别方法)   Cited: 2 (self-citations: 0, citations by others: 2)
Automatic target recognition (ATR) in synthetic aperture radar (SAR) imagery is currently one of the key research topics in pattern recognition, both in China and abroad. This paper presents a SAR image target recognition method with low memory requirements, low computational complexity, and good recognition performance: the target image is first obtained by adaptive threshold segmentation, its central-moment features are then extracted, and an SVM is used for recognition. Recognition experiments on measured MSTAR data from the United States verify the effectiveness of the method.
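The abstract gives the pipeline (segment, extract central moments, classify with an SVM) but no detail. The sketch below is a minimal stand-in: random arrays replace MSTAR chips, a per-image mean threshold replaces the paper's adaptive segmentation, and the normalized central moments are computed directly with NumPy.

```python
# Hypothetical sketch: central-moment features from thresholded image chips + SVM.
import numpy as np
from sklearn.svm import SVC

def central_moments(img, max_order=3):
    """Normalized central moments mu_pq / mu_00^((p+q)/2 + 1) for 2 <= p+q <= max_order."""
    ys, xs = np.mgrid[:img.shape[0], :img.shape[1]]
    m00 = img.sum()
    orders = [(p, q) for p in range(max_order + 1) for q in range(max_order + 1)
              if 2 <= p + q <= max_order]
    if m00 == 0:
        return np.zeros(len(orders))
    xbar, ybar = (img * xs).sum() / m00, (img * ys).sum() / m00
    feats = []
    for p, q in orders:
        mu = ((xs - xbar) ** p * (ys - ybar) ** q * img).sum()
        feats.append(mu / m00 ** ((p + q) / 2 + 1))    # scale normalization
    return np.array(feats)

rng = np.random.default_rng(2)
chips = rng.random((100, 32, 32))                 # placeholder SAR chips
labels = rng.integers(0, 3, size=100)             # placeholder target classes

def features(chip):
    target = (chip > chip.mean()).astype(float)   # simplified stand-in for adaptive thresholding
    return central_moments(target)

X = np.array([features(c) for c in chips])
clf = SVC(kernel="rbf").fit(X, labels)
print(clf.score(X, labels))
```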
7.
A random forest classification method for plant resistance gene identification (植物抗性基因识别中的随机森林分类方法)   Cited: 2 (self-citations: 0, citations by others: 2)
To address the high false-positive rate of traditional resistance gene identification methods based on homologous sequence alignment, and their inability to discover new resistance genes, a resistance gene identification algorithm is proposed that uses a random forest classifier and K-Means clustering-based undersampling. Two improvements are made to reduce the blindness of current mining approaches. First, a random forest classifier with a 188-dimensional combined feature set is introduced for resistance gene identification; this statistical-learning approach effectively captures the intrinsic characteristics of resistance genes. Second, to counter the severe class imbalance present during training, clustering-based undersampling is used to obtain a more representative training set, further reducing the identification error. Experimental results show that the algorithm identifies resistance genes effectively, classifies existing experimentally verified data accurately, and also achieves high precision on the negative set.
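The abstract names the two ingredients (K-Means undersampling of the majority class, then a random forest) without implementation detail. The sketch below shows one plausible reading, where the negative class is clustered and one representative sample is kept per cluster; the 188-dimensional features, labels, and "nearest to centroid" selection rule are assumptions.

```python
# Hypothetical sketch: K-Means-based undersampling of the majority class,
# followed by random forest training. Features and labels are random stand-ins.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import pairwise_distances_argmin_min

rng = np.random.default_rng(3)
X_pos = rng.normal(1.0, 1.0, size=(100, 188))     # resistance genes (minority)
X_neg = rng.normal(0.0, 1.0, size=(2000, 188))    # non-resistance genes (majority)

# Cluster the majority class and keep the sample closest to each centroid,
# so the reduced negative set still covers the whole negative distribution.
k = len(X_pos)
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X_neg)
closest, _ = pairwise_distances_argmin_min(km.cluster_centers_, X_neg)
X_neg_reduced = X_neg[closest]

X = np.vstack([X_pos, X_neg_reduced])
y = np.concatenate([np.ones(len(X_pos)), np.zeros(len(X_neg_reduced))])

clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, y)
print(clf.score(X, y))
```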
8.
With the development of Internet technology and the changing security landscape, the number of malware samples is growing exponentially and new variants emerge constantly. Traditional identification methods can no longer handle such massive data in a timely and effective way, so the traditional client-side scanning and defense model cannot meet the new security requirements, and the major security vendors have begun building their own "cloud security" programs. Against this background, research on key technologies for malware detection is essential. To address the large volume, rapid change, high dimensionality, and heavy noise of malware, we study software behavior identification in cloud computing environments; explore new data mining methods for massive software-sample collections, along with new models and algorithms for mining event-sequence cluster patterns and their application to malware identification; and build a prototype of a cloud-security-oriented intelligent malware identification system as well as an architecture for detecting Chinese phishing websites.
9.
陈令. 《信息网络安全》, 2012(6): 26-28, 43.
Based on the character color, character size, and relative positions of characters in a class of image CAPTCHAs, this article uses a rough set method to segment the characters out of the CAPTCHA image, and then trains the AdaBoost algorithm to recognize the segmented characters. Experiments show that, for this class of color CAPTCHAs, the algorithm achieves a high recognition rate and speed without requiring many training samples, and can basically meet the needs of real-time applications.
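The rough-set, color-based segmentation is the article's contribution and is not reproduced here. Purely as an illustration of the recognition stage, the sketch below trains AdaBoost with decision-stump weak learners on flattened character bitmaps; the image size, digit labels, and stump depth are assumptions.

```python
# Hypothetical sketch: AdaBoost recognition of already-segmented CAPTCHA characters.
# Random bitmaps stand in for the fixed-size character images produced by segmentation.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(4)
n_chars, h, w = 500, 16, 12
chars = rng.integers(0, 2, size=(n_chars, h, w))       # segmented character bitmaps
labels = rng.integers(0, 10, size=n_chars)             # digit classes 0-9

X = chars.reshape(n_chars, -1).astype(float)           # flatten pixels into features

clf = AdaBoostClassifier(
    estimator=DecisionTreeClassifier(max_depth=1),     # decision stumps as weak learners
    n_estimators=100,                                  # (older scikit-learn uses base_estimator=)
    random_state=0,
)
clf.fit(X, labels)
print(clf.score(X, labels))
```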
10.
To address the incomplete extraction and misclassification that occur in water body extraction from remote sensing imagery, a method is proposed for extracting subsidence-pond water bodies in mining areas from SPOT-5 multispectral imagery. An additional usable band is created by band synthesis, existing water extraction methods are improved accordingly, and a four-level extraction of mining-area water bodies is performed with a decision tree classifier and the improved method, ensuring the completeness of the extraction while reducing the misclassification rate. Finally, the extraction accuracy is assessed against field-measured data. Experimental results show that the decision-tree-based water extraction method achieves high accuracy and can meet the practical needs of mining areas.
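The abstract does not specify the synthesized band or the tree's structure. As a rough illustration only, the sketch below classifies pixels with a decision tree over SPOT-5-like band values plus an NDWI-style ratio band; the band names, the toy "ground truth" rule, and the tree depth are assumptions.

```python
# Hypothetical sketch: per-pixel water extraction with a decision tree.
# Band values and labels are simulated; real work would use calibrated SPOT-5
# imagery and field-verified water/non-water samples.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(5)
n_pixels = 5000
green = rng.uniform(0, 1, n_pixels)
red = rng.uniform(0, 1, n_pixels)
nir = rng.uniform(0, 1, n_pixels)
swir = rng.uniform(0, 1, n_pixels)
ndwi = (green - nir) / (green + nir + 1e-6)        # NDWI-style synthesized extra band

# Toy ground truth: water reflects little in NIR/SWIR relative to green.
is_water = ((ndwi > 0.2) & (swir < 0.3)).astype(int)

X = np.column_stack([green, red, nir, swir, ndwi])
clf = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X, is_water)
print(clf.score(X, is_water))
```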