1.
Image color clustering is a basic technique in image processing and computer vision, often applied in image segmentation, color transfer, contrast enhancement, object detection, skin color capture, and so forth. Various clustering algorithms have been employed for image color clustering in recent years. However, most of them require a large amount of memory or a predetermined number of clusters, and some are sensitive to their parameter configurations. To tackle these problems, we propose an image color clustering method named Student's t-based density peaks clustering with superpixel segmentation (tDPCSS), which obtains clustering results automatically, does not require a large amount of memory, and does not depend on algorithm parameters or the number of clusters. In tDPCSS, superpixels are obtained by automatic and constrained simple non-iterative clustering, which automatically reduces the image data volume. A Student's t kernel function and a cluster center selection method are adopted to eliminate the dependence of density peaks clustering on its parameters and on the number of clusters, respectively. The experiments undertaken in this study confirmed that the proposed approach outperforms k-means, fuzzy c-means, mean-shift clustering, and density peaks clustering with superpixel segmentation in the accuracy of the cluster centers and the validity of the clustering results.
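The density-peaks scores with a heavy-tailed kernel can be sketched as follows. This is a generic density peaks computation with a Student's t kernel, not the paper's full tDPCSS pipeline (the superpixel preprocessing and the automatic center-selection rule are omitted); the exact kernel form and the degrees-of-freedom parameter `nu` are assumptions.

```python
import numpy as np

def density_peaks(points, nu=1.0):
    """Density peaks scores with a Student's t kernel.

    Returns (rho, delta): local density and distance to the nearest
    denser point. Points where both are large are candidate centers.
    """
    pts = np.asarray(points, dtype=float)
    n = len(pts)
    # Pairwise Euclidean distances.
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
    # Student's t kernel: heavy-tailed, no cutoff-distance parameter.
    k = (1.0 + d ** 2 / nu) ** (-(nu + 1.0) / 2.0)
    rho = k.sum(axis=1) - 1.0  # exclude self-similarity (k_ii = 1)
    delta = np.empty(n)
    for i in range(n):
        denser = d[i, rho > rho[i]]
        delta[i] = denser.min() if denser.size else d[i].max()
    return rho, delta
```

Points with both high density `rho` and large distance `delta` to any denser point are candidate cluster centers; the heavy tail of the t kernel is what removes the cutoff-distance parameter of the original density peaks algorithm.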
2.
A process object is an instance of a process, represented as a graph whose vertexes and edges capture the objects themselves and the associations between them. At large data scales, such graphs reflect many changes, and finding appropriate real-time data for process objects has recently become a hot research topic. Data sampling is one way of finding changes in process objects, and the sampling needs to adapt to the underlying distribution of the data stream. In this paper, we propose an adaptive data sampling mechanism that finds appropriate data for modeling. First, we use concept drift to partition the life cycle of a process object. Then, entity community detection is applied to find changes. Finally, we propose stream-based real-time optimization of the data sampling. The contributions of this paper are the concept-drift partitioning, the community detection, and the stream-based real-time computation. Experiments show the effectiveness and feasibility of the proposed adaptive data sampling mechanism for process objects.
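The stream-sampling building block can be illustrated with classic reservoir sampling, which keeps a uniform fixed-size sample of a stream in one pass. This is only an illustrative baseline: the paper's mechanism additionally adapts the sample using concept drift and community detection, which are not reproduced here.

```python
import random

def reservoir_sample(stream, k, seed=0):
    """Keep a uniform random sample of size k from a stream of
    unknown length, visiting each item exactly once."""
    rng = random.Random(seed)
    sample = []
    for i, item in enumerate(stream):
        if i < k:
            sample.append(item)
        else:
            # Item i survives with probability k / (i + 1).
            j = rng.randrange(i + 1)
            if j < k:
                sample[j] = item
    return sample
```

An adaptive variant would reset or re-weight the reservoir whenever a drift detector signals that the underlying distribution has changed, so the sample tracks the current life-cycle phase rather than the whole history.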
3.
Bilingual word embedding usually maps from the source-language space to the target-language space, realizing cross-lingual word embedding through a linear transformation that minimizes the distance between the mapped source embeddings and the target space. However, large parallel corpora are hard to obtain, which limits embedding accuracy. To address cross-lingual word embedding with unequal corpus sizes and scarce bilingual corpora, this paper proposes a cross-lingual word embedding method based on a small dictionary and non-parallel corpora. First, the monolingual word vectors are normalized, and an orthogonal optimal linear transformation over the small-dictionary word pairs provides the initial value for gradient descent. Then, the large source-language (English) corpus is clustered; with the help of the small dictionary, the source-language words corresponding to each cluster are found, and the mean vector of each cluster, together with the mean vectors of the corresponding source- and target-language words, is used to build new bilingual word-vector correspondences. These new correspondences are added to the small dictionary, generalizing and extending it. Finally, the extended dictionary is used to optimize the cross-lingual embedding mapping model by gradient descent. Experiments on English–Italian, English–German, and English–Finnish show that the method reduces the number of gradient-descent iterations and the training time, while achieving good accuracy in cross-lingual word embedding.
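The initialization step, an orthogonal optimal linear transformation over the small-dictionary word pairs, has a closed-form solution (the orthogonal Procrustes problem). A minimal sketch, assuming row-wise matrices of normalized source and target vectors for the dictionary pairs; the clustering-based dictionary extension and the subsequent gradient descent are not shown.

```python
import numpy as np

def orthogonal_mapping(X, Y):
    """Solve min_W ||X W - Y||_F subject to W orthogonal.

    X: source-language vectors of the dictionary pairs (n x d).
    Y: the corresponding target-language vectors (n x d).
    The closed-form solution uses the SVD of X^T Y.
    """
    X = np.asarray(X, dtype=float)
    Y = np.asarray(Y, dtype=float)
    u, _, vt = np.linalg.svd(X.T @ Y)
    return u @ vt  # X @ W maps source vectors into the target space
```

Because W is orthogonal it preserves vector norms, which is one reason normalizing the monolingual vectors before mapping is a natural first step.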
4.
5.
To address the low accuracy, low execution efficiency, and sensitivity to false positives of methods that mine functional modules from protein–protein interaction (PPI) networks by combining spectral clustering with fuzzy C-means (FCM), this paper proposes an uncertain PPI-network functional-module mining method based on fuzzy spectral clustering (FSC-FM). First, an uncertain PPI network model is constructed, in which each protein interaction is assigned an existence-probability measure based on the edge clustering coefficient, overcoming the influence of false positives on the results. Second, the similarity computation in spectral clustering is improved with a manifold-distance strategy based on the edge clustering coefficient (FEC), solving the sensitivity of spectral clustering to the scale parameter; spectral clustering is then used to preprocess the uncertain PPI network data, reducing its dimensionality and improving clustering accuracy. Third, a density-based probabilistic center selection strategy (DPCS) is designed to address the sensitivity of FCM to the initial cluster centers and the number of clusters, and the preprocessed PPI data are clustered with FCM, improving execution efficiency and sensitivity. Finally, the mined protein functional modules are filtered using a modified edge expected density (EED). On the yeast DIP dataset, FSC-FM improves the F-measure by 27.92% and the execution efficiency by 27.92% compared with the uncertain-graph-based protein complex detection (DCU) algorithm, and it also achieves a higher F-measure and execution efficiency than the method for identifying complexes in dynamic PPI networks (CDUN), the evolutionary algorithm (EA), and the medical gene/protein prediction algorithm (MGPPA). The experimental results show that FSC-FM is suitable for mining functional modules in uncertain PPI networks.
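The fuzzy C-means stage at the heart of the method can be sketched as follows. This is plain FCM; the spectral preprocessing, the edge-clustering-coefficient similarity, and the density-based probabilistic center selection (DPCS) of FSC-FM are not reproduced, and the fuzzifier `m` and the random initialization are assumptions.

```python
import numpy as np

def fuzzy_c_means(X, c, m=2.0, iters=100, seed=0):
    """Alternate between center and membership updates.

    Each row of the membership matrix U sums to 1 (the probabilistic
    constraint), and point i's pull on center k is weighted by u_ik^m.
    """
    X = np.asarray(X, dtype=float)
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(iters):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        inv = d ** (-2.0 / (m - 1.0))
        U = inv / inv.sum(axis=1, keepdims=True)
    return centers, U
```

The sensitivity to initial centers that DPCS is designed to fix shows up here in the random initialization of `U`: different seeds can converge to different local optima.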
6.
An organization must perform readiness-related activities to ensure the successful implementation of an enterprise resource planning (ERP) system. This paper develops a novel approach to managing these interrelated activities in preparation for implementing an ERP system. The approach enables an organization to evaluate its ERP implementation readiness by assessing, with fuzzy cognitive maps, the degree to which it can achieve the interrelated readiness-related activities. Based on the degrees of interrelationship among the activities, the approach clusters the activities into manageable groups and prioritizes them. To help work out a readiness improvement plan, scenario analysis is conducted.
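The inference mechanism of a fuzzy cognitive map can be sketched as follows. The weight matrix, the sigmoid activation rule, and the concrete readiness activities are assumptions for illustration; the paper's actual map and its clustering and prioritization procedure are not reproduced.

```python
import numpy as np

def fcm_simulate(W, a0, iters=50, lam=1.0):
    """Iterate a fuzzy cognitive map: a(t+1) = sigmoid(lam * W^T a(t)).

    W[i, j] in [-1, 1] is the causal influence of activity i on
    activity j; a is the activation vector of the activities.
    """
    a = np.asarray(a0, dtype=float)
    for _ in range(iters):
        a = 1.0 / (1.0 + np.exp(-lam * (W.T @ a)))
    return a
```

Iterating from an initial activation vector drives the map toward a steady state, whose activation levels can then be read as achievability scores for the readiness activities under a given scenario.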
7.
Based on the multi-item Food Choice Questionnaire (FCQ) originally developed by Steptoe and colleagues (1995), the current study developed a single-item FCQ that provides an acceptable balance between practical needs and psychometric concerns. Studies 1 (N = 1851) and 2 (2a (N = 3290), 2b (N = 4723), 2c (N = 270)) showed that the single-item FCQ scale has good convergent and discriminant validity. In general, the results showed the highest correlations with the related multi-item dimensions (>0.40). Study 2 refined the scale: only the items for convenience (Study 2a), sensory appeal (Study 2b), and mood (Study 2c) needed to be revised, as Study 1 showed correlations between the multi-item and single-item versions below the threshold of 0.60. The results also showed comparable predictive validity: both methods revealed similar association patterns between food motives and consumption behaviours (Fisher's z tests revealed agreement of 86.2% for Study 1, 92.9% for Study 2a, and 100% for Studies 2b and 2c). Study 3 (N = 6062) illustrated the added value of a context-specific application of the single-item FCQ: different motives were shown to be relevant across contexts, and the context-specific motives explained additional variance beyond the general multi-item FCQ. Studies 2b and 3 also demonstrated the performance of the single-item FCQ in an international context. In sum, the results indicate that the single-item FCQ can be used as a flexible and short substitute for the multi-item FCQ. The study also discusses the conditions that should be considered when using the single-item scale.
8.
The electrocardiogram (ECG) is the most commonly used tool for the diagnosis of cardiac diseases. To help cardiologists diagnose arrhythmias automatically, new methods for automated, computer-aided ECG analysis are being developed. In this paper, a Modified Artificial Bee Colony (MABC) algorithm for ECG heartbeat classification is introduced. It is applied to an ECG data set obtained from the MIT-BIH database, and the accuracy of MABC is compared with that of seventeen other classifiers. In classification problems, some features are more discriminative than others. In this study, a detailed analysis of time-domain features was carried out to identify the most discriminative ones. By using the right features in the MABC algorithm, a high classification success rate (99.30%) is obtained. Other methods generally achieve high classification accuracy on the examined data set, but they have relatively low or even poor sensitivities for some beat types; different data sets and unbalanced sample numbers across classes affect the classification result. When a balanced data set is used, MABC provides the best result, 97.96%, among all classifiers. Not only part of the records from the examined MIT-BIH database but all data from the selected records are used, so that the developed algorithm can be deployed in a real-time system in the future with additional software modules and adaptation to specific hardware.
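A minimal artificial bee colony loop for continuous minimization is sketched below, to show the mechanism that MABC builds on. Only the employed-bee and scout phases are included (the onlooker phase and the paper's modifications are omitted), and all parameter values are assumptions; the paper uses the colony to train an ECG heartbeat classifier rather than to minimize a toy function.

```python
import numpy as np

def abc_minimize(f, dim, bounds, n_food=10, limit=20, iters=200, seed=0):
    """Artificial bee colony sketch: greedy one-dimension moves plus
    scout restarts for food sources that stop improving."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    foods = rng.uniform(lo, hi, (n_food, dim))
    fit = np.array([f(x) for x in foods])
    trials = np.zeros(n_food, dtype=int)
    for _ in range(iters):
        for i in range(n_food):  # employed-bee phase
            k = rng.integers(n_food - 1)
            k += k >= i  # pick a partner source different from i
            j = rng.integers(dim)
            cand = foods[i].copy()
            cand[j] += rng.uniform(-1, 1) * (foods[i, j] - foods[k, j])
            cand[j] = np.clip(cand[j], lo, hi)
            fc = f(cand)
            if fc < fit[i]:  # greedy selection
                foods[i], fit[i], trials[i] = cand, fc, 0
            else:
                trials[i] += 1
        for i in np.where(trials > limit)[0]:  # scout phase
            foods[i] = rng.uniform(lo, hi, dim)
            fit[i] = f(foods[i])
            trials[i] = 0
    best = fit.argmin()
    return foods[best], fit[best]
```

In the classification setting the "food sources" would encode classifier parameters and `f` would be a training-error objective over the ECG features rather than a toy function.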
9.
This study addresses the problem of choosing the most suitable probabilistic model selection criterion for unsupervised learning of the visual context of a dynamic scene using mixture models. A rectified Bayesian Information Criterion (BICr) and a Completed Likelihood Akaike's Information Criterion (CL-AIC) are formulated to estimate the optimal model order (complexity) for a given visual scene. Both criteria are designed to overcome the poor model selection of existing popular criteria when the data sample size varies from small to large and the true mixture distribution kernel functions differ from the assumed ones. Extensive experiments on learning visual context for dynamic scene modelling are carried out to demonstrate the effectiveness of BICr and CL-AIC compared with existing popular model selection criteria, including BIC, AIC, and Integrated Completed Likelihood (ICL). Our study suggests that, for learning visual context using a mixture model, BICr is the most appropriate criterion given sparse data, while CL-AIC should be chosen given moderate or large data sample sizes.
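For reference, the baseline criterion that BICr rectifies is the standard BIC. A minimal sketch, assuming a full-covariance Gaussian mixture; the rectified penalty of BICr and the CL-AIC formulation from the paper are not reproduced.

```python
import math

def bic(log_likelihood, n_params, n_samples):
    """Standard BIC: -2 ln L + k ln n; lower is better."""
    return -2.0 * log_likelihood + n_params * math.log(n_samples)

def gmm_param_count(k, d):
    """Free parameters of a k-component, full-covariance Gaussian
    mixture in d dimensions: (k-1) weights, k*d means, and
    k*d*(d+1)/2 symmetric covariance entries."""
    return (k - 1) + k * d + k * d * (d + 1) // 2
```

Model order selection then amounts to fitting mixtures for a range of k and keeping the k with the lowest criterion value; the criteria compared in the paper differ only in how the penalty term is constructed.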
10.
In this paper, a new approach to fault detection and isolation (FDI) based on the possibilistic clustering algorithm is proposed. FDI is shown here to be a pattern classification problem, which can be solved using clustering and classification techniques. A possibilistic clustering based approach is proposed to address some of the shortcomings of the fuzzy c-means (FCM) algorithm: the probabilistic constraint imposed on the membership values in FCM is relaxed in the possibilistic clustering algorithm. Because of this relaxation, the possibilistic approach is shown to give more consistent results for FDI tasks. The possibilistic clustering approach has also been used to detect novel fault scenarios for which no data were available during training. Fault signatures that change as a function of the fault intensity are represented as fault lines, which are shown to be useful for classifying faults that can manifest with different intensities. The proposed approach is validated through simulations involving a benchmark quadruple-tank process and through experimental case studies on the same setup. For large-scale systems, it is proposed to use the possibilistic clustering based approach in the lower-dimensional approximations generated by algorithms such as PCA. Towards this end, we also demonstrate the key merits of the algorithm in a plant-wide monitoring study using a simulation of the benchmark Tennessee Eastman problem.
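The relaxation that distinguishes possibilistic from fuzzy c-means can be shown in one formula. A minimal sketch of the typicality update from possibilistic c-means (Krishnapuram and Keller's formulation); the distance computation, the choice of the scale parameters `eta`, and the paper's fault-line construction are assumptions or omissions.

```python
import numpy as np

def pcm_membership(d2, eta, m=2.0):
    """Possibilistic typicality: u_ij = 1 / (1 + (d_ij^2 / eta_j)^(1/(m-1))).

    d2: squared distances to the cluster prototypes (n x c).
    eta: per-cluster scale parameters (c,).
    Unlike FCM, rows are not forced to sum to 1.
    """
    d2 = np.asarray(d2, dtype=float)
    eta = np.asarray(eta, dtype=float)
    return 1.0 / (1.0 + (d2 / eta) ** (1.0 / (m - 1.0)))
```

Because the rows of the typicality matrix need not sum to 1, a point far from every prototype gets low typicality in all clusters, which is what makes novel (untrained) fault scenarios detectable.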