1.
Clinical narratives such as progress summaries, lab reports, surgical reports, and other narrative texts contain key biomarkers of a patient's health. Evidence-based preventive medicine needs accurate semantic and sentiment analysis to extract and classify medical features as input to appropriate machine learning classifiers. The traditional single-classifier approach, however, is limited by the need for dimensionality reduction techniques, statistical feature correlation, and a faster learning rate, and by its failure to consider the semantic relations among features. Extracting semantic and sentiment-based features from clinical text and combining multiple classifiers into an ensemble intelligent system overcomes many of these limitations and yields a more robust prediction outcome. Selecting an appropriate approach and handling its inter-parameter dependencies are key to the success of the ensemble method. This paper proposes a hybrid knowledge and ensemble learning framework for predicting a venous thromboembolism (VTE) diagnosis, consisting of the following components: a VTE ontology, a semantic extraction and sentiment assessment of risk factors framework, and an ensemble classifier. A component-based analysis approach was adopted for evaluation on a data set of 250 clinical narratives, where the framework achieved the following results with and without semantic extraction and sentiment assessment of risk factors, respectively: a precision of 81.8% and 62.9%, a recall of 81.8% and 57.6%, an F-measure of 81.8% and 53.8%, and a receiver operating characteristic of 80.1% and 58.5% in identifying cases of VTE.
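The abstract does not specify how the base classifiers are combined, so as a minimal illustrative sketch, an ensemble can be reduced to majority voting over several base classifiers. The base classifiers and features below are toy stand-ins, not the paper's models.

```python
# Minimal sketch of an ensemble classifier combining several base
# classifiers by majority vote. Each base classifier is a hypothetical
# stand-in: a function mapping a feature vector to a 0/1 label.
from collections import Counter

def majority_vote(classifiers, x):
    """Combine base classifier outputs by simple majority vote."""
    votes = [clf(x) for clf in classifiers]
    return Counter(votes).most_common(1)[0][0]

# Three toy base classifiers over a 2-feature vector.
clf_a = lambda x: 1 if x[0] > 0.5 else 0
clf_b = lambda x: 1 if x[1] > 0.5 else 0
clf_c = lambda x: 1 if x[0] + x[1] > 1.0 else 0

print(majority_vote([clf_a, clf_b, clf_c], (0.8, 0.7)))  # all three vote 1
```

Real ensembles typically weight votes by each classifier's validation performance; plain majority voting is only the simplest instance of the idea.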
2.
Image color clustering is a basic technique in image processing and computer vision, often applied in image segmentation, color transfer, contrast enhancement, object detection, skin color capture, and so forth. Various clustering algorithms have been employed for image color clustering in recent years, but most require a large amount of memory or a predetermined number of clusters, and some are sensitive to their parameter configuration. To tackle these problems, we propose an image color clustering method named Student's t-based density peaks clustering with superpixel segmentation (tDPCSS), which obtains clustering results automatically, does not require a large amount of memory, and does not depend on algorithm parameters or the number of clusters. In tDPCSS, superpixels are obtained by automatic and constrained simple non-iterative clustering, which automatically reduces the image data volume. A Student's t kernel function and a cluster center selection method are adopted to eliminate the dependence of density peaks clustering on its parameters and on the number of clusters, respectively. The experiments undertaken in this study confirmed that the proposed approach outperforms k-means, fuzzy c-means, mean-shift clustering, and density peaks clustering with superpixel segmentation in the accuracy of the cluster centers and the validity of the clustering results.
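For orientation, the density peaks idea that tDPCSS builds on computes two quantities per point: a local density rho (shown here with a Gaussian kernel; the paper replaces it with a Student's t kernel) and delta, the distance to the nearest point of higher density. Cluster centers are points where both are large. The cutoff distance `dc` and the toy data are illustrative assumptions.

```python
# Hedged sketch of the two quantities density peaks clustering ranks on.
import math

def density_peaks(points, dc=1.0):
    """Return (rho, delta): Gaussian-kernel local density, and distance
    to the nearest point of strictly higher density."""
    n = len(points)
    dist = [[math.dist(p, q) for q in points] for p in points]
    rho = [sum(math.exp(-(dist[i][j] / dc) ** 2) for j in range(n) if j != i)
           for i in range(n)]
    delta = []
    for i in range(n):
        higher = [dist[i][j] for j in range(n) if rho[j] > rho[i]]
        # The global density maximum gets its largest distance instead.
        delta.append(min(higher) if higher else max(dist[i]))
    return rho, delta

# A dense group near the origin plus one far point: the group's densest
# member ends up with both high rho and high delta.
rho, delta = density_peaks([(0, 0), (0.1, 0), (0.2, 0), (5, 5)])
```

Points with high rho but low delta sit inside a cluster; high delta with low rho flags outliers, which is what makes the rho-delta plot useful for picking centers.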
3.
A process object is an instance of a process, represented as a graph of vertices and edges, with different types for the objects themselves and for the associations between objects. In large-scale data, many changes are reflected, and how to find appropriate real-time data for process objects has recently become a hot research topic. Data sampling is one way of finding the changes of process objects, and the sampling needs to be adaptive to the underlying distribution of the data stream. In this paper, we propose an adaptive data sampling mechanism that finds appropriate data for modeling. First, we use concept drift to partition the life cycle of a process object. Then, entity community detection is proposed to find changes. Finally, we propose stream-based real-time optimization of the data sampling. The contributions of this paper are the concept-drift partitioning, the community detection, and the stream-based real-time computing. Experiments show the effectiveness and feasibility of the proposed adaptive data sampling mechanism for process objects.
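The abstract leaves the drift mechanism unspecified; as a hedged sketch under simple assumptions, concept drift in a stream can be flagged by comparing the mean of a sliding recent window against a reference window and resetting the reference when they diverge. The window size and threshold are illustrative, not from the paper.

```python
# Illustrative windowed-mean drift detector; not the paper's mechanism.
from collections import deque

def detect_drift(stream, window=5, threshold=1.0):
    """Return the stream indices at which a drift is flagged."""
    ref = deque(maxlen=window)   # reference window (last stable regime)
    cur = deque(maxlen=window)   # sliding window of the most recent items
    drift_points = []
    for i, x in enumerate(stream):
        cur.append(x)
        if len(cur) == window and len(ref) == window:
            if abs(sum(cur) / window - sum(ref) / window) > threshold:
                drift_points.append(i)
                ref = deque(cur, maxlen=window)  # adopt the new regime
        if len(ref) < window:
            ref.append(x)  # still filling the initial reference
    return drift_points

# A stream whose mean jumps from 0 to 5 triggers drift shortly after.
print(detect_drift([0] * 10 + [5] * 10))
```

A drift point would then mark a boundary in the process object's life cycle, after which sampling restarts against the new distribution.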
4.
Mining similar users is an important way to improve the quality of social network services. In the big-data era of social networks, accurate similar-user mining matters to both users and Internet companies, and similar users mined from a user's own topics of interest better match the notion of a similar user. This paper proposes a method for mining similar users based on users' interest topics. The method first extracts each user's interest topics with the TextRank topic extraction method, then trains on the content users have posted to compute the similarity between all words. Four similarity measures between users' interest-topic words are proposed: CP (Corresponding Position similarity), CPW (Corresponding Position Weighted similarity), AP (All Position similarity), and APW (All Position Weighted similarity). The mining quality is validated by the overlap rate of followees and followers between a user and the mined similar users. APW similarity achieves a followee/follower overlap of 1.687%, outperforming the other three proposed measures by 26.3%, 2.8%, and 12.4%, respectively, and the traditional text-similarity measures (Jaccard similarity, edit distance, and cosine similarity) by 20.4%, 21.2%, and 45.0%, respectively. The APW method therefore mines a user's similar users more effectively.
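The abstract does not give the formulas for the four measures; as a hedged sketch, an "all positions" style similarity can be read as averaging the pairwise word similarities between two users' topic word lists. The `word_sim` function and the word lists below are toy stand-ins for the similarities learned from users' posts.

```python
# Illustrative "all positions" topic-word similarity between two users.
def all_position_similarity(words_a, words_b, word_sim):
    """Average word_sim over every cross pair of topic words."""
    pairs = [(a, b) for a in words_a for b in words_b]
    return sum(word_sim(a, b) for a, b in pairs) / len(pairs)

# Toy word similarity: 1 for identical words, 0 otherwise.
toy_sim = lambda a, b: 1.0 if a == b else 0.0

print(all_position_similarity(["ai", "nlp"], ["ai", "vision"], toy_sim))
```

A "weighted" variant in the APW spirit would additionally weight each pair, e.g. by the TextRank scores of the two topic words, before averaging.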
5.
For the news domain, this paper proposes a query-based automatic text summarization technique that targets users' information needs more directly. Sentence weights are computed from factors such as TF-IDF and similarity to the query sentence, and a temporal weighting coefficient based on the time each sentence refers to gives more recent news content a higher weight; summary sentences are finally selected by maximal marginal relevance. Compared with six methods including TF-IDF, TextRank, and LDA, the proposed method scores higher on the ROUGE metrics. The evaluation results and summary examples show that the method effectively extracts the core information from a news document set and satisfies the information need expressed in the user's query.
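The final selection step the abstract names, maximal marginal relevance (MMR), greedily picks sentences that balance relevance to the query against redundancy with sentences already chosen. The sketch below uses toy relevance and similarity functions; the trade-off weight `lam` is the standard MMR lambda, not a value from the paper.

```python
# Hedged sketch of MMR sentence selection for query-based summarization.
def mmr_select(candidates, relevance, similarity, k, lam=0.7):
    """Greedily pick k items maximizing lam*relevance - (1-lam)*redundancy."""
    selected = []
    remaining = list(candidates)
    while remaining and len(selected) < k:
        def score(s):
            redundancy = max((similarity(s, t) for t in selected), default=0.0)
            return lam * relevance(s) - (1 - lam) * redundancy
        best = max(remaining, key=score)
        selected.append(best)
        remaining.remove(best)
    return selected
```

In the paper's setting, `relevance` would combine the TF-IDF, query-similarity, and temporal weights described above, and `similarity` would be sentence-to-sentence similarity.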
6.
Bilingual word embedding usually maps from the source-language space to the target-language space, realizing cross-lingual word embedding through a linear transformation that embeds source words into the target space with minimal distance. Large parallel corpora are hard to obtain, however, which limits embedding accuracy. For cross-lingual word embedding with unequal corpus sizes and scarce bilingual corpora, this paper proposes a method based on a small dictionary and unequal corpora. First, the monolingual word vectors are normalized, and an orthogonal optimal linear transformation over the small dictionary's word pairs provides the initial value for gradient descent. The large source-language (English) corpus is then clustered, and the small dictionary is used to find the source words corresponding to each cluster; the mean vector of each cluster and the mean vectors of the corresponding source- and target-language words establish new bilingual vector correspondences, which are added to the small dictionary so that it generalizes and expands. Finally, the generalized and expanded dictionary is used to optimize the cross-lingual embedding mapping model by gradient descent. Experiments on English-Italian, English-German, and English-Finnish show that the method reduces the number of gradient-descent iterations and the training time while achieving good accuracy in cross-lingual word embedding.
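Once source vectors are mapped into the target space, translation reduces to nearest-neighbor retrieval by cosine similarity. The sketch below shows only that retrieval step with toy 2-dimensional vectors; the vocabulary and values are illustrative assumptions, not from the paper's experiments.

```python
# Hedged sketch of the retrieval step in cross-lingual embedding mapping:
# translate a (mapped) source vector to its nearest target word by cosine.
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.hypot(*u) * math.hypot(*v))

def translate(mapped_src_vec, tgt_vocab):
    """Return the target word whose vector is the nearest cosine neighbor."""
    return max(tgt_vocab, key=lambda w: cosine(mapped_src_vec, tgt_vocab[w]))

# Toy Italian target vocabulary with 2-d vectors.
tgt = {"cane": (0.9, 0.1), "gatto": (0.1, 0.9)}
print(translate((0.8, 0.2), tgt))  # nearest neighbor is "cane"
```

Normalizing all vectors to unit length first, as the paper does, makes cosine retrieval equivalent to a dot-product lookup and keeps the orthogonal mapping length-preserving.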
7.
8.
To address the low accuracy, low execution efficiency, and susceptibility to false positives of methods that mine functional modules from protein-protein interaction (PPI) networks by combining spectral clustering with fuzzy C-means (FCM), a fuzzy-spectral-clustering method for mining functional modules in uncertain PPI networks (FSC-FM) is proposed. First, an uncertain PPI network model is constructed in which every protein interaction is assigned an existence-probability measure based on the edge clustering coefficient, countering the effect of false positives on the experimental results. Second, the similarity computation in spectral clustering is improved with an edge-clustering-coefficient-based manifold distance (FEC) strategy, addressing the sensitivity of spectral clustering to the scale parameter; spectral clustering is then used to preprocess the uncertain PPI network data, reducing its dimensionality and improving clustering accuracy. Third, a density-based probabilistic center selection strategy (DPCS) is designed to address the sensitivity of fuzzy C-means to the initial cluster centers and the number of clusters, and FCM clustering is applied to the preprocessed PPI data, improving execution efficiency and sensitivity. Finally, the mined protein functional modules are filtered with an improved edge expected density (EED) measure. Running the algorithms on the yeast DIP data set shows that FSC-FM improves the F-measure by 27.92% and the execution efficiency by 27.92% compared with the algorithm for detecting protein complexes over an uncertain graph model (DCU), and it also achieves a higher F-measure and execution efficiency than the method for identifying complexes in dynamic PPI networks (CDUN), the evolutionary algorithm (EA), and the medical gene or protein prediction algorithm (MGPPA). The experimental results show that FSC-FM is well suited to mining functional modules in uncertain PPI networks.
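The existence probability FSC-FM attaches to each interaction is based on the edge clustering coefficient. One common variant, sketched here as an illustration (the paper's exact formula may differ), divides the number of common neighbors of an edge's endpoints by the maximum number of triangles the edge could participate in.

```python
# Hedged sketch of an edge clustering coefficient for weighting PPI edges.
def edge_clustering_coefficient(adj, u, v):
    """Fraction of possible triangles through edge (u, v) that exist."""
    common = len(adj[u] & adj[v])          # shared neighbors = triangles
    denom = min(len(adj[u]), len(adj[v])) - 1
    return common / denom if denom > 0 else 0.0

# Toy undirected PPI graph as adjacency sets.
adj = {
    "a": {"b", "c", "d"},
    "b": {"a", "c"},
    "c": {"a", "b"},
    "d": {"a"},
}
print(edge_clustering_coefficient(adj, "a", "b"))  # one shared neighbor "c"
```

Edges embedded in many triangles get coefficients near 1 and are treated as reliable interactions, while isolated edges, which are more likely false positives, score near 0.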
9.
An organization must perform readiness-relevant activities to ensure the successful implementation of an enterprise resource planning (ERP) system. This paper develops a novel approach to managing these interrelated activities in preparation for implementing an ERP system. The approach enables an organization to evaluate its ERP implementation readiness by assessing, with fuzzy cognitive maps, the degree to which it can achieve the interrelated readiness-relevant activities. Based on the degrees of interrelationship among the activities, the approach clusters the activities into manageable groups and prioritizes them. Scenario analysis is conducted to help work out a readiness improvement plan.
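A fuzzy cognitive map, the modeling tool the paper relies on, iterates concept activations through a weighted influence matrix and a squashing function. The sketch below shows one common update rule, state' = sigmoid(state + state · W); the two-activity example and weights are illustrative, not the paper's readiness model.

```python
# Hedged sketch of one fuzzy cognitive map iteration.
import math

def fcm_step(state, weights):
    """Update each concept from its own value plus weighted influences.

    weights[i][j] is the causal influence of concept i on concept j.
    """
    n = len(state)
    nxt = []
    for j in range(n):
        influence = sum(state[i] * weights[i][j] for i in range(n))
        nxt.append(1 / (1 + math.exp(-(state[j] + influence))))  # sigmoid
    return nxt

# Activity 0 positively influences activity 1; activity 1 has no effect.
print(fcm_step([1.0, 0.0], [[0.0, 1.0], [0.0, 0.0]]))
```

Iterating `fcm_step` until the state stabilizes yields the steady activations that readiness assessment and scenario analysis can then compare across intervention plans.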
10.
The electrocardiogram (ECG) is the most commonly used tool for diagnosing cardiac diseases. To help cardiologists diagnose arrhythmias automatically, new methods for automated, computer-aided ECG analysis are being developed. This paper introduces a Modified Artificial Bee Colony (MABC) algorithm for ECG heartbeat classification. It is applied to an ECG data set obtained from the MIT-BIH database, and the accuracy of MABC is compared with that of seventeen other classifiers. In classification problems, some features are more distinctive than others; to find the more distinctive ones, a detailed analysis of time-domain features was carried out. Using the right features in the MABC algorithm yields a high classification success rate of 99.30%. Other methods generally achieve high classification accuracy on the examined data set but have relatively low, or even poor, sensitivities for some beat types. Different data sets and unbalanced sample numbers across classes affect the classification results; when a balanced data set is used, MABC provides the best result among all classifiers, at 97.96%. Not only parts of the records from the examined MIT-BIH database but all data from the selected records are used, so that the developed algorithm can later be deployed on a real-time system with additional software modules and adaptation to specific hardware.
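The abstract does not detail the modification, but the core move of any artificial bee colony variant is the employed-bee step: perturb one dimension of a food source toward or away from a random neighbor and keep the change only if fitness improves. The fitness function and vectors below are toy placeholders, not ECG features.

```python
# Hedged sketch of the employed-bee move that ABC-style optimizers build on.
import random

def employed_bee_step(source, neighbour, fitness, rng):
    """Perturb one random dimension; keep the candidate only if it improves."""
    dim = rng.randrange(len(source))
    phi = rng.uniform(-1, 1)  # random step factor in [-1, 1]
    candidate = list(source)
    candidate[dim] = source[dim] + phi * (source[dim] - neighbour[dim])
    return candidate if fitness(candidate) > fitness(source) else list(source)

# Toy fitness: maximize -(x^2 + y^2), i.e. move toward the origin.
fit = lambda v: -sum(x * x for x in v)
rng = random.Random(0)
print(employed_bee_step([2.0, 2.0], [0.0, 0.0], fit, rng))
```

In a classification setting such as this paper's, the food sources would encode classifier parameters and the fitness would be the classification accuracy on the training beats.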