2,878 results found; search time 281 ms
1.
Clinical narratives such as progress summaries, lab reports, surgical reports, and other narrative texts contain key biomarkers of a patient's health. Evidence-based preventive medicine needs accurate semantic and sentiment analysis to extract and classify medical features as input to appropriate machine learning classifiers. However, the traditional single-classifier approach is limited by the need for dimensionality reduction techniques, statistical feature correlation, and a faster learning rate, and by its failure to consider semantic relations among features. Extracting semantic and sentiment-based features from clinical text and combining multiple classifiers into an ensemble intelligent system overcomes many of these limitations and provides a more robust prediction outcome. The selection of an appropriate approach and its inter-parameter dependency becomes key to the success of the ensemble method. This paper proposes a hybrid knowledge and ensemble learning framework for prediction of venous thromboembolism (VTE) diagnosis, consisting of the following components: a VTE ontology, a framework for semantic extraction and sentiment assessment of risk factors, and an ensemble classifier. A component-based analysis approach was adopted for evaluation on a data set of 250 clinical narratives, where the framework achieved the following results with and without semantic extraction and sentiment assessment of risk factors, respectively: a precision of 81.8% and 62.9%, a recall of 81.8% and 57.6%, an F-measure of 81.8% and 53.8%, and a receiver operating characteristic of 80.1% and 58.5% in identifying cases of VTE.
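The combination step of an ensemble like the one described above can be sketched as a simple majority vote over base classifiers. This is a minimal illustration, not the paper's actual combination rule, and the labels are invented:

```python
from collections import Counter

def majority_vote(predictions):
    # predictions: one list of per-sample class labels per base classifier.
    # Returns the majority label for each sample across the ensemble.
    return [Counter(votes).most_common(1)[0][0] for votes in zip(*predictions)]
```

With an odd number of classifiers there are no ties; otherwise `Counter` breaks ties in first-seen order.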
2.
Automated currency validation requires a decision regarding the authenticity of a banknote presented to the validation system. This decision often has to be made with little or no information about the characteristics of possible counterfeits, as is the case for new currency issues. A method for automated currency validation is presented that segments the whole banknote into regions, builds an individual classifier on each region, and then combines a small subset of the region-specific classifiers to produce an overall decision. The segmentation, and the combination of region-specific classifiers that optimizes the false positive and false negative rates, are found by a genetic algorithm. Experiments on high-value Sterling banknotes were carried out to assess the effectiveness of the proposed solution.
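A genetic search over subsets of region-specific classifiers might look like the following generic bitmask GA. The fitness function in the test (agreement with a target mask) is a hypothetical stand-in for the paper's combined false-positive/false-negative objective:

```python
import random

def ga_subset(fitness, n, generations=60, pop_size=20, seed=42):
    # Generic GA over bitmasks of length n: tournament selection,
    # one-point crossover, bit-flip mutation, and elitism.
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(generations):
        pick = lambda: max(rng.sample(pop, 2), key=fitness)  # tournament
        nxt = [max(pop, key=fitness)]                        # elitism
        while len(nxt) < pop_size:
            p1, p2 = pick(), pick()
            cut = rng.randrange(1, n)                        # one-point crossover
            child = p1[:cut] + p2[cut:]
            if rng.random() < 0.2:                           # mutation
                i = rng.randrange(n)
                child = child[:i] + [1 - child[i]] + child[i + 1:]
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)
```

In the paper's setting, each bit would indicate whether a given region's classifier participates in the final decision, and fitness would score the subset's validation error.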
3.
A novel successive learning algorithm based on a test feature classifier is proposed for efficient handling of sequentially provided training data. The fundamental characteristics of successive learning are considered: after a set of unknown data is recognized by the classifier, the data are fed back into the classifier to improve its performance. An efficient algorithm is proposed for the incremental definition of prime tests, irreducible combinations of features capable of classifying training patterns into their correct classes. Four strategies for adding training patterns are investigated with respect to precision and performance on real pattern data. The proposed classifier was applied to a real-world problem, the classification of defects in wafer images, and achieved excellent performance even with the efficient addition strategies.
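The notion of a prime test, a minimal feature subset on which every positive pattern differs from every negative pattern, can be sketched by brute force; the incremental algorithm in the paper is more efficient, and the binary feature values here are for illustration only:

```python
from itertools import combinations

def prime_tests(pos, neg):
    # A test is a feature subset on which every positive pattern differs
    # from every negative pattern; a prime test is a minimal such subset.
    n = len(pos[0])

    def separates(fs):
        return all(any(p[f] != q[f] for f in fs) for p in pos for q in neg)

    tests = []
    for r in range(1, n + 1):
        for fs in combinations(range(n), r):
            # Skip supersets of tests already found: they are not prime.
            if any(set(t) <= set(fs) for t in tests):
                continue
            if separates(fs):
                tests.append(fs)
    return tests
```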
4.
谢小帆, 王菊霞. 《南方金属》 2004, (4): 35-37, 42
Describes the design and application of the automatic slot-dividing device (自动分槽器) at the No. 5 Rolling Mill of Shaogang (韶钢).
5.
A very simple algorithm for computing all k nearest neighbors in 2-D is presented. The method does not rely on complicated forms of tessellation; it requires only simple data binning for fast range searching. Its applications range from scattered data interpolation to reverse engineering.
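A minimal sketch of the binning idea, assuming k does not exceed the number of points: points are hashed into uniform grid cells, and rings of cells around the query are visited until the k-th nearest distance is guaranteed to lie within the searched area:

```python
import math
from collections import defaultdict

def knn_binned(points, query, k, cell=1.0):
    # Hash points into uniform grid cells for fast range searching.
    grid = defaultdict(list)
    for p in points:
        grid[(int(p[0] // cell), int(p[1] // cell))].append(p)
    qc = (int(query[0] // cell), int(query[1] // cell))
    best, r = [], 0
    while True:  # assumes k <= len(points)
        # Visit the ring of cells at Chebyshev distance r from the query cell.
        for dx in range(-r, r + 1):
            for dy in range(-r, r + 1):
                if max(abs(dx), abs(dy)) != r:
                    continue
                for p in grid.get((qc[0] + dx, qc[1] + dy), ()):
                    best.append((math.dist(p, query), p))
        best.sort()
        best = best[:k]
        # Cells visited so far cover every point within distance r*cell of
        # the query, so the k nearest are final once the k-th distance fits.
        if len(best) == k and best[-1][0] <= r * cell:
            return [p for _, p in best]
        r += 1
```

The cell size trades memory for ring-expansion work; a cell roughly matching the average point spacing keeps both small.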
6.
Centroid-based categorization is one of the most popular algorithms in text classification. In this approach, normalization is an important factor in improving the performance of a centroid-based classifier when documents in the collection have quite different sizes and/or the numbers of documents per class are unbalanced. In the past, most researchers applied document normalization, e.g., document-length normalization, while some considered a simple kind of class normalization, so-called class-length normalization, to address the unbalancedness problem. However, there is no intensive work that clarifies how these normalizations affect classification performance or whether other useful normalizations exist. The purpose of this paper is threefold: (1) to investigate the effectiveness of document- and class-length normalization on several data sets, (2) to evaluate a number of commonly used normalization functions, and (3) to introduce a new type of class normalization, called term-length normalization, which exploits the term distribution among documents in the class. The experimental results show that a classifier with the weight-merge-normalize approach (class-length normalization) performs better than one with the weight-normalize-merge approach (document-length normalization) on data sets with unbalanced numbers of documents per class, and is quite competitive on those with balanced numbers. Among normalization functions, normalization based on term weighting performs best on average. Term-length normalization itself is useful for improving classification accuracy. The combination of term- and class-length normalization outperforms pure class-length normalization, pure term-length normalization, and no normalization, by margins of 4.29%, 11.50%, and 30.09%, respectively.
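The contrast between weight-merge-normalize (class-length normalization) and weight-normalize-merge (document-length normalization) can be sketched with term-frequency vectors; function names here are illustrative, not from the paper:

```python
import math
from collections import Counter

def l2_normalize(vec):
    norm = math.sqrt(sum(w * w for w in vec.values())) or 1.0
    return {t: w / norm for t, w in vec.items()}

def centroid(docs, class_length_norm=True):
    # docs: term-frequency Counters of the documents in one class.
    merged = Counter()
    if class_length_norm:
        # weight-merge-normalize: sum raw vectors, then normalize once.
        for d in docs:
            merged.update(d)
    else:
        # weight-normalize-merge: normalize each document, then sum.
        for d in docs:
            merged.update(Counter(l2_normalize(d)))
    return l2_normalize(merged)

def classify(tokens, centroids):
    # Cosine similarity of the query document against each class centroid.
    q = l2_normalize(Counter(tokens))
    return max(centroids,
               key=lambda c: sum(q.get(t, 0.0) * w for t, w in centroids[c].items()))
```

In the merge-then-normalize variant, long documents dominate the centroid; normalizing first gives every document equal weight regardless of length.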
7.
A SAR image target recognition method based on central moment features
Automatic target recognition in synthetic aperture radar (SAR) imagery is a major research topic in pattern recognition, both in China and abroad. This paper presents a SAR image target recognition method with small memory requirements, low computational complexity, and good recognition performance: the target image is first obtained by adaptive threshold segmentation, its central moment features are then extracted, and an SVM performs the recognition. Recognition experiments on measured data from the U.S. MSTAR data set verify the effectiveness of the method.
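Central moments of a segmented target image, as used for features here, can be computed directly; the scale-invariant normalization exponent shown is the standard convention and is an assumption, since the abstract does not specify it:

```python
import numpy as np

def central_moment(img, p, q):
    # Central moment mu_pq of a grayscale image about its intensity
    # centroid, normalized for scale invariance (assumed convention).
    img = np.asarray(img, dtype=float)
    y, x = np.mgrid[:img.shape[0], :img.shape[1]]
    m00 = img.sum()
    xc, yc = (x * img).sum() / m00, (y * img).sum() / m00
    mu = ((x - xc) ** p * (y - yc) ** q * img).sum()
    return mu / m00 ** ((p + q) / 2 + 1)
```

A feature vector is then a handful of low-order moments (e.g. mu_20, mu_02, mu_11, mu_30, ...) fed to the SVM.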
8.
A random forest classification method for plant resistance gene identification
To address the high false-positive rate of traditional homology-based sequence alignment methods for resistance gene identification, and their inability to discover novel resistance genes, a resistance gene identification algorithm using a random forest classifier and K-Means clustering-based undersampling is proposed. Two improvements target the largely blind mining in current work: a random forest classifier with a 188-dimensional combined feature set is introduced for resistance gene identification, a statistical learning approach that effectively captures the intrinsic properties of resistance genes; and, to counter the severe class imbalance during training, clustering-based undersampling yields a more representative training set, further reducing identification error. Experimental results show that the algorithm identifies resistance genes effectively, classifies existing experimentally verified data accurately, and also achieves high precision on the negative set.
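The clustering-based undersampling step can be sketched with a small k-means that keeps the majority-class sample nearest each centroid; this is a simplified stand-in for the paper's K-Means downsampling, with invented parameter names:

```python
import numpy as np

def kmeans_undersample(X_major, n_keep, iters=20, seed=0):
    # Cluster the majority class with k-means (k = n_keep) and keep the
    # sample nearest each centroid, giving a representative subset.
    rng = np.random.default_rng(seed)
    X = np.asarray(X_major, dtype=float)
    centers = X[rng.choice(len(X), n_keep, replace=False)]
    for _ in range(iters):
        dist = np.linalg.norm(X[:, None] - centers[None], axis=2)
        labels = dist.argmin(axis=1)
        for j in range(n_keep):
            members = X[labels == j]
            if len(members):                    # skip empty clusters
                centers[j] = members.mean(axis=0)
    dist = np.linalg.norm(X[:, None] - centers[None], axis=2)
    return X[np.unique(dist.argmin(axis=0))]
```

The reduced majority class is then combined with the full minority class before training the random forest.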
9.
With the development of Internet technology and changes in the security landscape, the number of malware samples has grown exponentially and variants emerge endlessly; traditional identification methods can no longer process such massive data in a timely and effective way, so the traditional client-side detection-and-defense model cannot meet the new security demands, and major security vendors have begun building their own "cloud security" programs. Against this background, research on key malware detection techniques is essential. To address the large volume, rapid change, high dimensionality, and heavy noise of malware, we study software behavior identification in cloud computing environments, explore new methods for mining massive software sample data and new models and algorithms for event-sequence cluster pattern mining together with their application to malware identification, and build a prototype cloud-security-oriented intelligent malware identification system as well as an architecture for detecting Chinese phishing websites.
10.
Code smells are software characteristics caused by poor code or design problems, and they seriously harm the reliability and maintainability of software systems. A single code element may be affected by several code smells at once, noticeably degrading software quality. Multi-label classification suits this situation: placing highly co-occurring smells in the same label group better captures their correlations. However, existing multi-label code smell detection methods ignore the effect of the order in which the multiple smells of a code element are detected. This paper therefore proposes a multi-label code smell detection method based on an ensemble of classifier chains (ECC) driven by ranking loss: random forest serves as the base classifier, ECC is iterated multiple times, and a better set of label sequences is selected by minimizing ranking loss, thereby optimizing the detection order and modeling the smells' generation mechanism. The method detects whether a code element simultaneously exhibits three smell groups: long method and long parameter list; complex class and message chain; or message chain and large class. Experiments using nine evaluation metrics show that the proposed method outperforms existing multi-label code smell detection methods, with an average F1 of 97.16%.
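The ranking-loss objective used to select label orderings can be sketched as the average fraction of (relevant, irrelevant) label pairs that are mis-ordered by the predicted scores; ties are counted as errors here, which is one common convention and an assumption about the paper's exact definition:

```python
def ranking_loss(y_true, y_score):
    # Multi-label ranking loss: per sample, the fraction of
    # (relevant, irrelevant) label pairs where the irrelevant label
    # scores at least as high as the relevant one, averaged over samples.
    total, loss = 0, 0.0
    for yt, ys in zip(y_true, y_score):
        rel = [j for j, v in enumerate(yt) if v]
        irr = [j for j, v in enumerate(yt) if not v]
        pairs = len(rel) * len(irr)
        if not pairs:
            continue  # samples with all or no labels contribute nothing
        bad = sum(ys[r] <= ys[i] for r in rel for i in irr)
        loss += bad / pairs
        total += 1
    return loss / total if total else 0.0
```

Candidate label orderings for the classifier chains would be scored with this loss on held-out data, and the lowest-loss orderings kept for the ensemble.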