101.
As smart grid construction advances, existing security models such as ISO 7498-2 and PPDR cannot adequately guide the security protection of smart grid systems. A new security model for power grids is proposed, based on an active, three-dimensional defense architecture. The model has three dimensions: security technology, security policy, and security assurance. These three dimensions organically combine security technology, security policy, and security management, fully consider people, technology, and operations, and complement and coordinate with one another to form a complete, unified framework that jointly safeguards grid security.
102.
Heterogeneous computing is an inevitable trend in the development of high-efficiency computing. To address the difficulty of matching parallel tasks to the architecture when running heterogeneous computations, a parallel-task clustering method that matches tasks to the architecture is proposed. The concept of efficiency and the architecture-aware clustering problem in heterogeneous computing are first introduced; the relationship between heterogeneous matching and efficiency is then analyzed theoretically, and a clustering theory for achieving heterogeneity and structure matching is proposed, with the goal of exploiting the potential of the machines in a heterogeneous system, processing parallel tasks cooperatively, and achieving high efficiency. A corresponding algorithm is given on this basis. Finally, simulation experiments show that, by matching the cluster graph to the architecture, the method reduces the share of communication overhead in execution time, thereby shortening parallel execution time, improving system utilization, and ultimately achieving high efficiency in heterogeneous computing.
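The abstract does not reproduce the clustering algorithm itself. As a hedged illustration of the general idea, merging heavily communicating tasks so that the communication share of execution time shrinks, the Python sketch below greedily merges the most communicative pair of clusters until the target number of machine groups is reached. The function name `cluster_tasks` and the `comm[(i, j)]` input format are hypothetical, not taken from the paper.

```python
# Hypothetical sketch: greedily cluster a task graph so that heavily
# communicating tasks land in the same group, reducing the share of
# communication overhead in the parallel execution time.
from itertools import combinations

def cluster_tasks(comm, num_machines):
    """comm[(i, j)] = communication volume between tasks i and j."""
    tasks = {t for pair in comm for t in pair}
    clusters = [{t} for t in tasks]          # start with one cluster per task

    def inter_cost(a, b):
        # total communication volume crossing the two clusters
        return sum(comm.get((i, j), 0) + comm.get((j, i), 0)
                   for i in a for j in b)

    while len(clusters) > num_machines:
        # merge the pair of clusters with the largest mutual communication
        a, b = max(combinations(clusters, 2), key=lambda p: inter_cost(*p))
        clusters.remove(a)
        clusters.remove(b)
        clusters.append(a | b)
    return clusters

if __name__ == "__main__":
    comm = {(0, 1): 10, (1, 2): 1, (2, 3): 8, (0, 3): 1}
    print(cluster_tasks(comm, num_machines=2))   # e.g. [{0, 1}, {2, 3}]
```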
103.
Physically based rendering of scenes with volumetric illumination from flames remains a challenging problem due to the complexity of their heterogeneous radiative properties. Current bidirectional importance sampling strategies have focused on emissive light sources without anisotropic extinction. In this paper, we present an efficient importance sampling method for volumetric light sources with anisotropic extinction. According to the radiative properties of flames, we separate the computation of anisotropic extinction from the evaluation of illumination inside the flame and utilize cluster-based hierarchies to rapidly estimate both. To exploit the coherence of radiative voxels, we also propose a new similarity metric for aggregating voxels into clusters. For each pixel to be shaded, we use these clusters to rapidly approximate the importance function of the voxels and draw the final illumination samples from the clusters. Our results show that this approach substantially reduces the variance of rendered images for scenes containing flames.
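As a rough, non-authoritative illustration of cluster-based importance sampling of an emissive volume, the sketch below assigns each voxel cluster an importance from its total emission attenuated by a crude extinction term and then draws samples cluster-first. The attenuation model and all names (`sample_voxel`, `sigma_t`) are assumptions, not the paper's formulation.

```python
# Hypothetical sketch of cluster-based importance sampling of emissive voxels:
# per-cluster importance is approximated from total emission attenuated by a
# distance-based extinction term, and final samples are drawn cluster-first.
import math
import random

def sample_voxel(clusters, shade_point, sigma_t, n_samples=4):
    """clusters: list of lists of (position, emission) voxels."""
    def importance(cluster):
        # crude importance: emission attenuated by homogeneous extinction
        total = 0.0
        for pos, emission in cluster:
            d = math.dist(pos, shade_point)
            total += emission * math.exp(-sigma_t * d)
        return total

    weights = [importance(c) for c in clusters]
    samples = []
    for _ in range(n_samples):
        cluster = random.choices(clusters, weights=weights, k=1)[0]
        # within the chosen cluster, pick a voxel proportional to its emission
        voxel = random.choices(cluster, weights=[e for _, e in cluster], k=1)[0]
        samples.append(voxel)
    return samples

if __name__ == "__main__":
    flame = [[((0, 0, 1), 5.0), ((0, 0, 1.2), 4.0)],   # hot core cluster
             [((0.5, 0, 2), 0.5)]]                      # dim tip cluster
    print(sample_voxel(flame, shade_point=(2, 0, 0), sigma_t=0.3))
```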
104.
Cluster validity indices are used to validate clustering results and to find the set of clusters that best fits the natural partitions of a given data set. Most previous validity indices depend considerably on the number of data objects in clusters, on cluster centroids, and on average values, and they tend to ignore small clusters and clusters of low density. Two cluster validity indices are proposed for efficient validation of partitions containing clusters that differ widely in size and density. The first proposed index exploits a compactness measure and a separation measure, and the second is based on an overlap measure and a separation measure. The compactness and overlap measures are calculated from a few data objects of a cluster, while the separation measure uses all data objects. The compactness measure is calculated only from data objects of a cluster that are far enough away from the cluster centroid, while the overlap measure is calculated from data objects that are near enough to one or more other clusters. A good partition is expected to have a low degree of overlap, large separation, and high compactness. The maximum value of the ratio of compactness to separation and the minimum value of the ratio of overlap to separation indicate the optimal partition. Testing both proposed indices on several artificial data sets and three well-known real data sets showed their effectiveness and reliability.
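The abstract only names the ingredients of the two indices. The sketch below fills in illustrative definitions (a quantile threshold for "far enough" points and a near-factor for overlap) that are assumptions rather than the paper's formulas, simply to show how compactness/separation and overlap/separation ratios might be computed.

```python
# Illustrative (not the paper's exact formulas): compactness from points far
# from their own centroid, overlap from points nearly as close to a foreign
# centroid, separation from the minimum centroid-to-centroid distance.
import numpy as np

def validity_ratios(X, labels, far_q=0.75, near_factor=1.2):
    centroids = {k: X[labels == k].mean(axis=0) for k in np.unique(labels)}

    cs = list(centroids.values())
    separation = min(np.linalg.norm(a - b)
                     for i, a in enumerate(cs) for b in cs[i + 1:])

    compact_terms, overlap_terms = [], []
    for k, c in centroids.items():
        pts = X[labels == k]
        d_own = np.linalg.norm(pts - c, axis=1)
        far = pts[d_own >= np.quantile(d_own, far_q)]     # outer shell only
        compact_terms.append(np.linalg.norm(far - c, axis=1).mean())
        for other_k, other_c in centroids.items():
            if other_k == k:
                continue
            d_other = np.linalg.norm(pts - other_c, axis=1)
            # fraction of points almost as close to another centroid as to their own
            overlap_terms.append(np.mean(d_other <= near_factor * d_own))

    compactness = 1.0 / np.mean(compact_terms)            # tighter shell => larger value
    overlap = np.mean(overlap_terms)
    return compactness / separation, overlap / separation

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(3, 0.3, (50, 2))])
    labels = np.array([0] * 50 + [1] * 50)
    print(validity_ratios(X, labels))
```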
105.
Clustering of data in an uncertain environment can result in different partitions of the data at different points in time. The clusters initially formed from non-stationary data can therefore adapt over time, which means that feature vectors associated with different clusters can follow different migration types to and from other clusters. This paper investigates different data migration types and proposes a technique to generate artificial non-stationary data that follows these migration types. Furthermore, the paper proposes clustering performance measures that are better suited to measuring clustering quality in a non-stationary environment than the measures used for stationary environments. The proposed performance measures are then used to compare the clustering results of three network-based artificial immune models, since the adaptability and self-organising behaviour of the natural immune system inspired the modelling of network-based artificial immune models for clustering non-stationary data.
106.
In a graph-theoretic model, clustering is the process of dividing vertices into groups with a higher density of edges within groups than between them. In this paper, we introduce a new clustering method for detecting such groups and use it to analyse some classic social networks. The new method has two distinguishing features: a non-binary hierarchical tree and support for overlapping clustering. A non-binary hierarchical tree is much smaller than the binary trees constructed by most traditional methods and therefore clearly highlights meaningful clusters, significantly reducing the manual effort needed for cluster selection. The method is tested on several benchmark data sets whose community structure is known beforehand, and the results indicate that it is a sensitive and accurate method for extracting community structure from social networks.
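A minimal check of the defining property, higher edge density within groups than between them, can be written directly from the definition. The helper below is a generic sketch, not the paper's method, and its name (`within_between_density`) is invented for illustration.

```python
# Hypothetical helper: compare edge density inside groups with edge density
# between groups, the property a community structure is expected to satisfy.
from itertools import combinations

def within_between_density(edges, groups):
    """edges: set of frozensets {u, v}; groups: list of vertex sets."""
    def density(pairs):
        pairs = list(pairs)
        if not pairs:
            return 0.0
        hits = sum(1 for u, v in pairs if frozenset((u, v)) in edges)
        return hits / len(pairs)

    within_pairs, between_pairs = [], []
    for g in groups:
        within_pairs.extend(combinations(sorted(g), 2))
    for g1, g2 in combinations(groups, 2):
        between_pairs.extend((u, v) for u in g1 for v in g2)
    return density(within_pairs), density(between_pairs)

if __name__ == "__main__":
    edges = {frozenset(e) for e in [(1, 2), (2, 3), (1, 3), (4, 5), (3, 4)]}
    print(within_between_density(edges, [{1, 2, 3}, {4, 5}]))  # (1.0, ~0.17)
```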
107.
Almost all subspace clustering algorithms proposed so far are designed for numeric data sets. In this paper, we present a k-means-type clustering algorithm that finds clusters in subspaces of mixed numeric and categorical data sets. In this method, we compute the contribution of each attribute to the different clusters, and we propose a new cost function for a k-means-type algorithm. One advantage of the algorithm is that its complexity is linear in the number of data points. The algorithm is also useful for describing cluster formation in terms of the contribution of attributes to different clusters. It is tested on various synthetic and real data sets to show its effectiveness; the clustering results are explained using the attribute weights in the clusters and are also compared with published results.
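As a hedged sketch of how attribute weights can enter a k-means-type assignment step for mixed data, the code below uses a simple weighted distance with a squared difference for numeric attributes and a mismatch penalty for categorical ones. The distance, the `gamma` parameter, and the fixed weights are illustrative stand-ins, not the paper's cost function.

```python
# Hypothetical sketch of one assignment step for mixed numeric/categorical data,
# with per-attribute weights standing in for the "attribute contribution" idea.

def mixed_distance(x, center, numeric_idx, weights, gamma=1.0):
    d = 0.0
    for j, (xv, cv) in enumerate(zip(x, center)):
        if j in numeric_idx:                      # squared difference for numeric
            d += weights[j] * (xv - cv) ** 2
        else:                                     # simple mismatch for categorical
            d += weights[j] * gamma * (xv != cv)
    return d

def assign(points, centers, numeric_idx, weights):
    return [min(range(len(centers)),
                key=lambda k: mixed_distance(p, centers[k], numeric_idx, weights))
            for p in points]

if __name__ == "__main__":
    points = [(1.0, "red"), (1.2, "red"), (8.0, "blue")]
    centers = [(1.1, "red"), (8.0, "blue")]
    weights = [1.0, 0.5]                          # illustrative attribute weights
    print(assign(points, centers, numeric_idx={0}, weights=weights))  # [0, 0, 1]
```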
108.
Document clustering using synthetic cluster prototypes  (Total citations: 3; self-citations: 0; citations by others: 3)
The use of centroids as prototypes for clustering text documents with the k-means family of methods is not always the best choice for representing text clusters, due to the high dimensionality, sparsity, and low quality of text data. Especially when we seek clusters with a small number of objects, using centroids may lead to poor solutions near bad initial conditions. To overcome this problem, we propose the idea of a synthetic cluster prototype, which is computed by first selecting a subset of cluster objects (instances), then computing a representative of these objects, and finally selecting important features. In this spirit, we introduce the MedoidKNN synthetic prototype, which favors the representation of the dominant class in a cluster. These synthetic cluster prototypes are incorporated into the generic spherical k-means procedure, leading to a robust clustering method called k-synthetic prototypes (k-sp). A comparative experimental evaluation demonstrates the robustness of the approach, especially for small data sets and clusters overlapping in many dimensions, and its superior performance against traditional and subspace clustering methods.
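A minimal sketch of the "select objects, compute a representative, select features" recipe for a synthetic prototype might look as follows. The concrete choices (medoid by summed cosine similarity, the medoid's k nearest members, top-m features) are assumptions for illustration and not the exact MedoidKNN definition from the paper.

```python
# Hypothetical sketch of building a synthetic cluster prototype for sparse
# text vectors: pick the cluster medoid, average its k nearest cluster members,
# then keep only the strongest features.
import numpy as np

def synthetic_prototype(X, k=5, top_features=50):
    """X: (n_docs, n_terms) rows of one cluster, assumed L2-normalized."""
    sims = X @ X.T                                  # cosine similarities
    medoid = np.argmax(sims.sum(axis=1))            # most central document
    nearest = np.argsort(sims[medoid])[::-1][:k]    # medoid's k nearest docs
    proto = X[nearest].mean(axis=0)                 # representative of the subset
    keep = np.argsort(proto)[::-1][:top_features]   # simple feature selection
    sparse_proto = np.zeros_like(proto)
    sparse_proto[keep] = proto[keep]
    norm = np.linalg.norm(sparse_proto)
    return sparse_proto / norm if norm > 0 else sparse_proto

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    docs = rng.random((20, 100))
    docs /= np.linalg.norm(docs, axis=1, keepdims=True)
    print(synthetic_prototype(docs, k=5, top_features=10).nonzero()[0])
```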
109.
Traditional clustering analysis methods based on true distances are ill-suited to accurately computing the rupture propagation and healing velocities of different earthquake faults. To improve earthquake prediction accuracy, a clustering method based on soft-distance computation is proposed and established. The soft-distance clustering process, the soft-distance computation method, and a concrete clustering algorithm based on soft-distance computation are given. Using real strong-earthquake sample points as the clustering data source, the proposed method and other traditional clustering methods are applied to cluster the sample data. The results show that the cluster centers obtained with the proposed method are closer to the objective reality of crustal stress field evolution, and the method provides a sound computational basis for accurately estimating the next strong earthquake on a fault zone.
110.
An important problem in anti-money laundering is predicting the transactions a suspicious account may make in the future. Markov models are widely used for prediction in economic domains such as stocks, commodity prices, and market share, but the prediction accuracy of a single Markov model leaves room for improvement. A hybrid Markov model is proposed that combines clustering and association rules from data mining with low-order Markov models; confidence-based pruning is applied during model construction to reduce time complexity, and the model is then used to predict transactions between accounts in the anti-money-laundering domain. Experiments show that the model achieves high prediction accuracy and strikes a good balance between prediction accuracy and time complexity.
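Only the low-order Markov component lends itself to a compact sketch. The code below builds a first-order transition table over transaction counterparties and predicts the most likely next one, omitting the clustering, association-rule, and confidence-based pruning parts of the hybrid model. All names are hypothetical.

```python
# Hypothetical sketch of the low-order Markov component only: a first-order
# transition table over historical counterparties and a most-likely-next query.
from collections import defaultdict

def build_transitions(sequences):
    counts = defaultdict(lambda: defaultdict(int))
    for seq in sequences:
        for prev, nxt in zip(seq, seq[1:]):
            counts[prev][nxt] += 1
    # normalize counts into conditional probabilities P(next | prev)
    return {p: {n: c / sum(ns.values()) for n, c in ns.items()}
            for p, ns in counts.items()}

def predict_next(transitions, last_counterparty):
    dist = transitions.get(last_counterparty)
    if not dist:
        return None
    return max(dist, key=dist.get)

if __name__ == "__main__":
    histories = [["A", "B", "C", "B", "C"], ["A", "B", "C"], ["B", "C", "D"]]
    model = build_transitions(histories)
    print(predict_next(model, "B"))   # most likely next counterparty, here "C"
```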