Results by access type:
  Full text (subscription)   2611
  Free   535
  Free (domestic)   430
Results by subject:
  Electrical engineering   183
  General   254
  Chemical industry   128
  Metalworking   28
  Machinery and instruments   152
  Building science   17
  Mining engineering   77
  Energy and power   23
  Light industry   33
  Water conservancy   7
  Petroleum and natural gas   26
  Weapons industry   25
  Radio and electronics   597
  General industrial technology   152
  Metallurgical industry   21
  Atomic energy technology   50
  Automation technology   1803
Results by year:
  2024   3
  2023   55
  2022   88
  2021   109
  2020   116
  2019   94
  2018   86
  2017   132
  2016   124
  2015   171
  2014   246
  2013   209
  2012   244
  2011   310
  2010   187
  2009   192
  2008   226
  2007   219
  2006   202
  2005   151
  2004   100
  2003   62
  2002   60
  2001   38
  2000   23
  1999   30
  1998   21
  1997   15
  1996   10
  1995   14
  1994   4
  1993   4
  1991   1
  1990   3
  1989   6
  1988   4
  1987   1
  1985   1
  1984   4
  1983   6
  1981   1
  1980   3
  1977   1
3576 results in total (search time: 15 ms)
1.
Clinical narratives such as progress summaries, lab reports, surgical reports, and other narrative texts contain key biomarkers about a patient's health. Evidence-based preventive medicine needs accurate semantic and sentiment analysis to extract and classify medical features as input to appropriate machine learning classifiers. However, the traditional single-classifier approach is limited by the need for dimensionality reduction, statistical feature correlation, and faster learning rates, and by its failure to consider the semantic relations among features. Extracting semantic and sentiment-based features from clinical text and combining multiple classifiers into an ensemble intelligent system overcomes many of these limitations and yields a more robust prediction outcome. Selecting an appropriate approach and managing its inter-parameter dependencies are key to the success of the ensemble method. This paper proposes a hybrid knowledge and ensemble learning framework for predicting venous thromboembolism (VTE) diagnosis, consisting of a VTE ontology, a framework for semantic extraction and sentiment assessment of risk factors, and an ensemble classifier. A component-based analysis approach was adopted for evaluation on a data set of 250 clinical narratives, where the framework achieved the following results with and without semantic extraction and sentiment assessment of risk factors, respectively: a precision of 81.8% and 62.9%, a recall of 81.8% and 57.6%, an F-measure of 81.8% and 53.8%, and a receiver operating characteristic score of 80.1% and 58.5% in identifying cases of VTE.
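As an illustration only (not the authors' implementation), the following minimal Python sketch shows how extracted semantic and sentiment features might feed a soft-voting ensemble of heterogeneous classifiers; the feature matrix, labels, and choice of base learners are all invented placeholders.

```python
# Hypothetical sketch of an ensemble over extracted clinical features.
# The feature matrix X (semantic + sentiment scores per narrative) and
# labels y (VTE yes/no) are simulated placeholders, not the paper's data.
import numpy as np
from sklearn.ensemble import VotingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(250, 20))        # 250 narratives, 20 extracted features
y = rng.integers(0, 2, size=250)      # 1 = VTE case, 0 = non-case

# Soft voting averages the base learners' predicted probabilities, one
# common way to build the kind of ensemble the abstract describes.
ensemble = VotingClassifier(
    estimators=[
        ("lr", LogisticRegression(max_iter=1000)),
        ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
        ("svm", SVC(probability=True, random_state=0)),
    ],
    voting="soft",
)
print(cross_val_score(ensemble, X, y, cv=5, scoring="f1").mean())
```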
2.
1 Introduction. One of the key problems in high-speed digital radio communication is the inter-symbol interference (ISI) caused by the multipath between the transmitter and the receiver. Because of the relative movement between the receiver and the transmitter and the continuous change of the transmission media, ISI is usually time-variant. In the past two to three decades, a broad study of the types, structures and adaptive algorithms of adaptive equalizers has been carried out to mitigate ISI [1~7]. With the introduction of adaptive antennas and diversity reception tec…
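For context, the least-mean-squares (LMS) adaptive equalizer is the textbook example of the adaptive equalization this introduction surveys. The sketch below is a generic illustration, not the paper's algorithm; the toy channel, step size, and tap count are assumptions.

```python
# Minimal LMS adaptive equalizer sketch: a standard approach to mitigating
# ISI, shown for illustration only (channel and parameters are invented).
import numpy as np

rng = np.random.default_rng(1)
symbols = rng.choice([-1.0, 1.0], size=5000)      # BPSK training sequence
channel = np.array([1.0, 0.5, 0.2])               # toy multipath channel
received = np.convolve(symbols, channel)[:len(symbols)]
received += 0.05 * rng.normal(size=len(symbols))  # additive noise

n_taps, mu = 11, 0.01                             # equalizer length, step size
w = np.zeros(n_taps)
delay = n_taps // 2                               # decision delay
for n in range(n_taps, len(symbols)):
    x = received[n - n_taps:n][::-1]              # tap-delay-line input
    e = symbols[n - delay] - w @ x                # error vs. desired symbol
    w += mu * e * x                               # LMS weight update
# After adaptation, hard decisions on w @ x recover the transmitted symbols.
```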
3.
Automated currency validation requires a decision to be made about the authenticity of a banknote presented to the validation system. This decision often has to be made with little or no information about the characteristics of possible counterfeits, as is the case for new currency issues. A method for automated currency validation is presented that segments the whole banknote into regions, builds an individual classifier on each region, and then combines a small subset of the region-specific classifiers to produce an overall decision. The segmentation and the combination of region-specific classifiers that optimize the false positive and false negative rates are found by a genetic algorithm. Experiments on high-value Sterling banknotes were carried out to assess the effectiveness of the proposed solution.
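A hedged sketch of the idea: a genetic algorithm evolves a bit mask over region-specific classifiers, scoring each mask by a weighted sum of false positive and false negative rates. The simulated per-region predictions, population size, and operators are illustrative assumptions, not the paper's configuration.

```python
# Hedged sketch: a genetic algorithm selects a subset of region-specific
# classifiers (a bit mask) minimizing a weighted sum of false positive and
# false negative rates. The per-region predictions are simulated.
import numpy as np

rng = np.random.default_rng(2)
n_regions, n_notes = 16, 400
labels = rng.integers(0, 2, size=n_notes)                 # 1 = genuine note
# Simulated region classifiers: each flips the true label with prob. 0.2.
region_preds = (labels + (rng.random((n_regions, n_notes)) < 0.2)) % 2

def cost(mask):
    if not mask.any():
        return 1.0
    vote = region_preds[mask].mean(axis=0) >= 0.5         # majority vote
    fp = np.mean(vote[labels == 0] == 1)                  # false positive rate
    fn = np.mean(vote[labels == 1] == 0)                  # false negative rate
    return 0.5 * fp + 0.5 * fn                            # equal weighting

pop = rng.random((30, n_regions)) < 0.5                   # initial population
for _ in range(50):
    fitness = np.array([cost(ind) for ind in pop])
    parents = pop[np.argsort(fitness)[:10]]               # truncation selection
    children = []
    while len(children) < len(pop):
        a, b = parents[rng.integers(10, size=2)]
        cut = rng.integers(1, n_regions)                  # one-point crossover
        child = np.concatenate([a[:cut], b[cut:]])
        child ^= rng.random(n_regions) < 0.02             # bit-flip mutation
        children.append(child)
    pop = np.array(children)
best = pop[np.argmin([cost(ind) for ind in pop])]
```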
4.
A novel successive learning algorithm based on a test feature classifier is proposed for efficient handling of sequentially provided training data. The fundamental characteristics of successive learning are considered: after a set of unknown data is recognized by the classifier, the data are fed back into the classifier to improve its performance. An efficient algorithm is proposed for the incremental construction of prime tests, which are irreducible combinations of features capable of classifying training patterns into their correct classes. Four strategies for adding training patterns are investigated with respect to their precision and performance on real pattern data. The proposed classifier was applied to a real-world problem, the classification of defects in wafer images, and achieved excellent performance even with the efficient addition strategies.
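As a rough illustration of what "prime tests" mean, the sketch below enumerates, by brute force over a tiny binary data set, the minimal feature subsets on which no two training patterns from different classes agree; the data and the enumeration strategy are invented and ignore the paper's incremental construction.

```python
# Hedged sketch: enumerate "prime tests" -- minimal feature subsets under
# which no two training patterns from different classes agree on all
# selected features. Toy binary data; brute force suits only few features.
from itertools import combinations

patterns = [((1, 0, 1, 0), "A"), ((0, 1, 0, 1), "A"),
            ((1, 1, 0, 1), "B"), ((0, 0, 1, 0), "B")]

def separates(subset):
    # A test separates the classes if no cross-class pair matches on subset.
    for (x, cx) in patterns:
        for (y, cy) in patterns:
            if cx != cy and all(x[i] == y[i] for i in subset):
                return False
    return True

n = len(patterns[0][0])
tests = [s for r in range(1, n + 1)
         for s in combinations(range(n), r) if separates(s)]
# Prime tests are the separating subsets with no separating proper subset.
prime = [s for s in tests if not any(set(t) < set(s) for t in tests)]
print(prime)
```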
5.
谢小帆, 王菊霞. 《南方金属》, 2004(4): 35-37, 42
This paper describes the design and application of the automatic groove splitter (自动分槽器) at the No. 5 Rolling Mill of 韶钢 (Shaoguan Iron and Steel).
6.
Centroid-based categorization is one of the most popular algorithms in text classification. In this approach, normalization is an important factor in improving the performance of a centroid-based classifier when documents in the collection have quite different sizes and/or the numbers of documents per class are unbalanced. In the past, most researchers applied document normalization, e.g., document-length normalization, while some considered a simple kind of class normalization, so-called class-length normalization, to address the imbalance problem. However, no intensive work has clarified how these normalizations affect classification performance or whether other useful normalizations exist. The purpose of this paper is threefold: (1) to investigate the effectiveness of document- and class-length normalizations on several data sets, (2) to evaluate a number of commonly used normalization functions, and (3) to introduce a new type of class normalization, called term-length normalization, which exploits the term distribution among documents in a class. The experimental results show that a classifier with the weight-merge-normalize approach (class-length normalization) performs better than one with the weight-normalize-merge approach (document-length normalization) on data sets with unbalanced numbers of documents per class, and is quite competitive on those with balanced numbers. Among the normalization functions, normalization based on term weighting performs best on average. Term-length normalization is useful for improving classification accuracy, and the combination of term- and class-length normalizations outperforms pure class-length normalization, pure term-length normalization, and no normalization by margins of 4.29%, 11.50%, and 30.09%, respectively.
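A minimal sketch of the two orders of operations compared above, on an invented term-count matrix: weight-normalize-merge (document-length normalization) versus weight-merge-normalize (class-length normalization). The weighting and similarity choices are assumptions.

```python
# Hedged sketch of the two centroid-building orders the abstract compares:
# "normalize-merge" (normalize each document vector, then average) versus
# "merge-normalize" (sum raw vectors per class, then normalize once).
import numpy as np

X = np.array([[3.0, 0, 1], [2, 1, 0],                # class 0 documents
              [0, 4, 2], [0, 2, 6], [1, 3, 5]])      # class 1 documents
y = np.array([0, 0, 1, 1, 1])

def l2(v):
    return v / np.linalg.norm(v)

def centroids(X, y, order):
    cents = []
    for c in np.unique(y):
        docs = X[y == c]
        if order == "normalize-merge":               # document-length norm.
            cents.append(l2(np.apply_along_axis(l2, 1, docs).mean(axis=0)))
        else:                                        # class-length norm.
            cents.append(l2(docs.sum(axis=0)))
    return np.array(cents)

def classify(x, cents):
    return int(np.argmax(cents @ l2(x)))             # cosine similarity

for order in ("normalize-merge", "merge-normalize"):
    c = centroids(X, y, order)
    print(order, [classify(x, c) for x in X])
```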
7.
A SAR image target recognition method based on central moment features   Cited by 2 (0 self-citations, 2 by others)
Synthetic aperture radar (SAR) automatic target recognition is currently one of the key research topics in pattern recognition, both in China and abroad. This paper presents a SAR image target recognition method with low memory requirements, low computational complexity, and good recognition performance: the target image is first obtained by adaptive threshold segmentation, its central moment features are then extracted, and an SVM is used for recognition. Recognition experiments on measured data from the US MSTAR program verify the effectiveness of the method.
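A hedged sketch of the feature step: segment a synthetic image chip with a simple global threshold (standing in for the paper's adaptive thresholding) and compute central moments as translation-invariant features; all parameters are invented.

```python
# Hedged sketch of the feature step: segment a SAR chip by a crude global
# threshold (a stand-in for the paper's adaptive thresholding) and compute
# central moments mu_pq as recognition features. The image is synthetic.
import numpy as np

rng = np.random.default_rng(3)
img = rng.random((64, 64)) * 0.2
img[24:40, 28:36] += 1.0                       # bright "target" region

mask = (img > img.mean() + 2 * img.std()).astype(float)   # segmentation
ys, xs = np.mgrid[:mask.shape[0], :mask.shape[1]]
m00 = mask.sum()
xc, yc = (xs * mask).sum() / m00, (ys * mask).sum() / m00  # centroid

def mu(p, q):
    # Central moment of order (p, q), translation-invariant by construction.
    return (((xs - xc) ** p) * ((ys - yc) ** q) * mask).sum()

features = [mu(p, q) for p in range(4) for q in range(4) if 2 <= p + q <= 3]
# These features would then be fed to an SVM classifier, as in the abstract.
```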
8.
This paper introduces the two methods commonly used to monitor fuel element failure in marine reactors. After analyzing their shortcomings, it proposes measuring the characteristic nuclides (131I, 137Cs) with a NaI multichannel pulse amplitude analysis system, which effectively avoids the interference factors in monitoring, lowers the false alarm rate of quantitative monitoring, and improves the efficiency and confidence of fuel element failure monitoring.
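As a generic illustration of the measurement idea (not the system described), the sketch below estimates net counts in regions of interest around the 364 keV (131I) and 662 keV (137Cs) photopeaks of a simulated multichannel spectrum, with linear background subtraction; the calibration, peak widths, and spectrum are all invented.

```python
# Hedged sketch: net counts in regions of interest (ROIs) around the
# 364 keV (I-131) and 662 keV (Cs-137) photopeaks of a simulated
# multichannel pulse amplitude spectrum, with linear background subtraction.
import numpy as np

rng = np.random.default_rng(4)
kev_per_ch = 1.0                                  # assumed energy calibration
channels = np.arange(1024)
spectrum = 200 * np.exp(-channels / 400.0)        # smooth continuum
for peak, area in ((364, 5000), (662, 3000)):     # sigma*sqrt(2*pi) ~= 25
    spectrum += area * np.exp(-0.5 * ((channels - peak) / 10.0) ** 2) / 25.0
spectrum = rng.poisson(spectrum)                  # counting statistics

def net_counts(peak_kev, half_width=30):
    lo = int(peak_kev / kev_per_ch) - half_width
    hi = int(peak_kev / kev_per_ch) + half_width
    gross = spectrum[lo:hi + 1].sum()
    # Linear background estimated from the channels flanking the ROI.
    bkg = 0.5 * (spectrum[lo - 1] + spectrum[hi + 1]) * (hi - lo + 1)
    return gross - bkg

print("I-131 net:", net_counts(364), " Cs-137 net:", net_counts(662))
```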
9.
A random forest classification method for plant resistance gene identification   Cited by 2 (0 self-citations, 2 by others)
To overcome the high false positive rate of traditional resistance gene identification methods based on homologous sequence alignment, and their inability to discover new resistance genes, an identification algorithm using a random forest classifier and K-Means-based undersampling is proposed. It makes two improvements aimed at the blindness of current gene-mining work. First, a random forest classifier with a 188-dimensional combined feature vector is introduced for resistance gene identification; this statistical learning approach effectively captures the intrinsic characteristics of resistance genes. Second, to counter the severe class imbalance in training, a clustering-based undersampling method produces a more representative training set, further reducing the identification error. Experimental results show that the algorithm identifies resistance genes effectively, classifies the existing experimentally validated data accurately, and also achieves high precision on the negative set.
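A hedged sketch of the described pipeline: undersample the majority class by clustering it with K-Means and keeping the sample nearest each cluster center, then train a random forest on 188-dimensional features. The data here is simulated, and the nearest-to-center rule is one plausible reading of the clustering-based undersampling.

```python
# Hedged sketch: K-Means-based undersampling of the majority class, then a
# random forest on 188-dimensional features. All data is simulated.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(5)
X_pos = rng.normal(1.0, 1.0, size=(100, 188))     # resistance genes (minority)
X_neg = rng.normal(0.0, 1.0, size=(1000, 188))    # negatives (majority)

k = len(X_pos)                                    # undersample to balance
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X_neg)
# Keep the majority-class point closest to each cluster center.
dists = km.transform(X_neg)                       # (n_neg, k) distances
keep = np.unique(dists.argmin(axis=0))
X_neg_sub = X_neg[keep]

X = np.vstack([X_pos, X_neg_sub])
y = np.r_[np.ones(len(X_pos)), np.zeros(len(X_neg_sub))]
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
```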
10.
The rapid development of compressive sensing (CS) shows that it is possible to recover a sparse signal from very limited measurements. Synthetic aperture radar (SAR) imaging based on CS can reconstruct the target scene from a reduced number of collected samples by solving an optimization problem. For multi-channel SAR imaging based on CS, each channel requires sufficient samples for separate imaging, so the total number of samples can still be large. We propose an imaging algorithm based on distributed compressive sensing (DCS) that reconstructs scenes jointly across multiple channels. Multi-channel SAR imaging based on DCS exploits not only the sparsity of the target scene but also the correlation among channels, and therefore requires significantly fewer samples than multi-channel SAR imaging based on CS. If the channels offer different sampling rates, DCS joint processing can reconstruct target scenes with a much more flexible allocation of the measurements contributed by each channel than separate CS processing allows.
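As an illustration of the joint-sparsity idea behind DCS (not the paper's solver), the sketch below recovers several channel signals that share a common sparse support using simultaneous iterative hard thresholding; dimensions, sparsity level, and step size are invented.

```python
# Hedged sketch: simultaneous iterative hard thresholding with a shared
# support across channels, illustrating the joint-sparsity idea of DCS.
import numpy as np

rng = np.random.default_rng(6)
n, m, channels, k = 256, 80, 3, 8
support = rng.choice(n, size=k, replace=False)    # common sparsity pattern
X_true = np.zeros((n, channels))
X_true[support] = rng.normal(size=(k, channels))  # per-channel amplitudes

A = rng.normal(size=(m, n)) / np.sqrt(m)          # shared measurement matrix
Y = A @ X_true                                    # m measurements per channel

X = np.zeros((n, channels))
step = 1.0 / np.linalg.norm(A, 2) ** 2            # safe gradient step size
for _ in range(300):
    X = X + step * A.T @ (Y - A @ X)              # gradient step, all channels
    energy = np.linalg.norm(X, axis=1)            # joint row energy
    X[np.argsort(energy)[:-k]] = 0.0              # keep the k strongest rows
print("relative error:", np.linalg.norm(X - X_true) / np.linalg.norm(X_true))
```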