61.
A Brief Analysis of Random Forest Theory   (Total citations: 5; self-citations: 0; citations by others: 5)
Random forest is a well-known ensemble learning method that is widely applied to data classification and non-parametric regression. This paper reviews the main theory of the random forest algorithm in three parts: the random forest convergence theorem, the generalization error bound, and out-of-bag estimation. Finally, an improved random forest algorithm based on attribute-weighted subspace sampling is introduced for classifying ultra-high-dimensional data.
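Out-of-bag estimation, one of the three topics listed above, can be illustrated with a short sketch. The abstract names no implementation; scikit-learn's `RandomForestClassifier` and the synthetic data below are assumptions for illustration only:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Synthetic classification data stands in for a real data set.
X, y = make_classification(n_samples=500, n_features=20, random_state=0)

# oob_score=True evaluates each tree on the roughly one third of
# samples left out of its bootstrap draw; aggregating these votes
# gives an almost-unbiased estimate of generalization accuracy
# without a separate validation set.
forest = RandomForestClassifier(n_estimators=200, oob_score=True,
                                random_state=0).fit(X, y)
print(round(forest.oob_score_, 3))
```

Because the out-of-bag samples were never seen by the trees that score them, `oob_score_` plays the role of the generalization-error estimate discussed in the theory.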
62.
The growing volume of information, the number of people browsing the web, and rising expectations for online experiences have driven improvements in content production. While richer content delivers better visual and auditory experiences, it also places heavy demands on computer hardware. Memory is an important component of the hardware system, and whether it is partitioned sensibly has a significant effect on the overall computing experience.  相似文献
63.
Focusing on the understanding of low-carbon, energy-efficient water-conservancy data centers, this paper analyzes and discusses their basic concepts, design philosophy, implementation methods, best practices, and core technologies from the perspective of data center infrastructure planning and construction. The aim is to support the construction or renovation of water-conservancy data centers, to meet the requirements of green, energy-saving, emission-reducing operation, and to follow the development trend of next-generation data centers.  相似文献
64.
This paper proposes a new structured analysis method for software development framework models, introduces the commonly used structured-analysis tools, and discusses the situations in which each is most applicable.  相似文献
65.
This paper discusses practical applications of outlier (anomaly) data mining. It briefly introduces the definition and classification of outliers and the definition and functions of outlier mining, then describes in detail the applications of outlier mining techniques in finance, telecommunications, commerce, biomedicine, and DNA analysis.  相似文献
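As a minimal illustration of flagging outliers, a z-score rule can be sketched as follows. The abstract names no specific algorithm, so the rule, the threshold, and the toy data are all assumptions:

```python
import statistics

def zscore_outliers(values, threshold=3.0):
    """Flag points whose z-score magnitude exceeds the threshold."""
    mean = statistics.fmean(values)
    stdev = statistics.pstdev(values)
    if stdev == 0:
        return []
    return [v for v in values if abs((v - mean) / stdev) > threshold]

data = [10, 11, 9, 10, 12, 11, 10, 95]  # 95 is an injected anomaly
# A single extreme point inflates the standard deviation of a small
# sample, so a lower threshold than the usual 3.0 is used here.
print(zscore_outliers(data, threshold=2.0))
```

In the domains the abstract lists (fraud in finance, churn in telecom), such simple rules are typically only a baseline before more robust detectors.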
66.
To meet the requirements of interactive media website construction, an interactive multimedia learning community website was designed and developed using ASP.NET 2.0. This paper explores the implementation of online teaching and multimedia learning, and describes in detail the main functional design and core code of the site.  相似文献
67.
Share price trends can be recognized by using data clustering methods. However, the accuracy of these methods may be rather low. This paper presents a novel supervised classification scheme for the recognition and prediction of share price trends. We first produce a smooth time series using zero-phase filtering and singular spectrum analysis from the original share price data. We train pattern classifiers using the classification results of both original and filtered time series and then use these classifiers to predict the future share price trends. Experimental results obtained from both synthetic data and real share prices show that the proposed method is effective and outperforms the well-known K-means clustering algorithm.  相似文献
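The zero-phase filtering step can be sketched roughly as below. The paper does not give its filter design, so the Butterworth filter, its order and cutoff, and the synthetic price series are assumptions; `scipy.signal.filtfilt` is used because it applies the filter forward and backward, which is what makes the result zero-phase:

```python
import numpy as np
from scipy.signal import butter, filtfilt

# Synthetic "share price": a slow oscillation plus noise, standing in
# for real price data.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 200)
price = 100 + 10 * np.sin(2 * np.pi * 2 * t) + rng.normal(0, 1, t.size)

# filtfilt runs the low-pass filter forward and then backward, so the
# smoothed series has no phase lag relative to the original prices --
# important when the smoothed curve is used to label trend direction.
b, a = butter(N=4, Wn=0.1)  # 4th-order Butterworth; cutoff is an assumption
smooth = filtfilt(b, a, price)
print(smooth.shape)
```

The smoothed series would then feed the singular spectrum analysis and classifier-training stages described in the abstract.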
68.
An automated method was developed for mapping forest cover change using satellite remote sensing data sets. This multi-temporal classification method consists of a training data automation (TDA) procedure and uses the advanced support vector machines (SVM) algorithm. The TDA procedure automatically generates training data using input satellite images and existing land cover products. The derived high quality training data allow the SVM to produce reliable forest cover change products. This approach was tested in 19 study areas selected from major forest biomes across the globe. In each area a forest cover change map was produced using a pair of Landsat images acquired around 1990 and 2000. High resolution IKONOS images and independently developed reference data sets were available for evaluating the derived change products in 7 of those areas. The overall accuracy values were over 90% for 5 areas, and were 89.4% and 89.6% for the remaining two areas. The user's and producer's accuracies of the forest loss class were over 80% for all 7 study areas, demonstrating that this method is especially effective for mapping major disturbances with low commission errors. IKONOS images were also available in the remaining 12 study areas but they were either located in non-forest areas or in forest areas that did not experience forest cover change between 1990 and 2000. For those areas the IKONOS images were used to assist visual interpretation of the Landsat images in assessing the derived change products. This visual assessment revealed that for most of those areas the derived change products likely were as reliable as those in the 7 areas where accuracy assessment was conducted. The results also suggest that images acquired during leaf-off seasons should not be used in forest cover change analysis in areas where deciduous forests exist. 
Being highly automatic and with demonstrated capability to produce reliable change products, the TDA-SVM method should be especially useful for quantifying forest cover change over large areas.
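The SVM classification stage can be sketched roughly as follows. scikit-learn's `SVC` stands in for the paper's SVM implementation, and the synthetic features and labels stand in for the Landsat band values and TDA-derived training pixels; all of these are assumptions:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Synthetic spectral features stand in for Landsat band values; the
# 0/1 labels stand in for "stable forest" vs "forest loss" training
# pixels that the TDA procedure would derive from existing land-cover
# products.
X, y = make_classification(n_samples=1000, n_features=6, random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)

clf = SVC(kernel="rbf").fit(X_tr, y_tr)  # RBF kernel is an assumption
print(round(clf.score(X_te, y_te), 2))
```

In the paper's pipeline the quality of the automatically generated training data, not the classifier itself, is the key to the reported accuracies.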
69.
Large-scale simulation of separation phenomena in solids such as fracture, branching, and fragmentation requires a scalable data structure representation of the evolving model. Modeling of such phenomena can be successfully accomplished by means of cohesive models of fracture, which are versatile and effective tools for computational analysis. A common approach to insert cohesive elements in finite element meshes consists of adding discrete special interfaces (cohesive elements) between bulk elements. The insertion of cohesive elements along bulk element interfaces for fragmentation simulation imposes changes in the topology of the mesh. This paper presents a unified topology-based framework for supporting adaptive fragmentation simulations, being able to handle two- and three-dimensional models, with finite elements of any order. We represent the finite element model using a compact and “complete” topological data structure, which is capable of retrieving all adjacency relationships needed for the simulation. Moreover, we introduce a new topology-based algorithm that systematically classifies fractured facets (i.e., facets along which fracture has occurred). The algorithm follows a set of procedures that consistently perform all the topological changes needed to update the model. The proposed topology-based framework is general and ensures that the model representation remains always valid during fragmentation, even when very complex crack patterns are involved. The framework correctness and efficiency are illustrated by arbitrary insertion of cohesive elements in various finite element meshes of self-similar geometries, including both two- and three-dimensional models. These computational tests clearly show linear scaling in time, which is a key feature of the present data-structure representation. The effectiveness of the proposed approach is also demonstrated by dynamic fracture analysis through finite element simulations of actual engineering problems.
Glaucio H. Paulino  相似文献
70.
If the production process, production equipment, or materials change, it becomes necessary to execute pilot runs before mass production in manufacturing systems. Using the limited data obtained from pilot runs to shorten the lead time for predicting future production is therefore worthy of study. Artificial neural networks are widely utilized to extract management knowledge from acquired data, but sufficient training data is their fundamental assumption. Unfortunately, this is often not achievable for pilot runs, because few data are obtained during trial stages, which in theory means that the knowledge obtained is fragile. The purpose of this research is to utilize the bootstrap to generate virtual samples that fill the information gaps of sparse data. The results of this research indicate that the prediction error rate can be significantly decreased by applying the proposed method to a very small data set.  相似文献
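Bootstrap generation of virtual samples can be sketched as below. The paper's exact bootstrap variant is not specified, so generating each virtual sample as the mean of a with-replacement resample is an assumption, as are the toy pilot-run values:

```python
import numpy as np

def virtual_samples(data, n_virtual, rng=None):
    """Generate virtual samples by bootstrap resampling: draw with
    replacement from the observed pilot-run data, then average each
    resample so the virtual points fall between the observations."""
    rng = np.random.default_rng(rng)
    data = np.asarray(data, dtype=float)
    # Each row is one bootstrap resample of the pilot data; its mean
    # becomes one virtual sample.
    draws = rng.choice(data, size=(n_virtual, data.size), replace=True)
    return draws.mean(axis=1)

pilot = [4.1, 3.8, 4.4, 4.0, 3.9]  # a tiny pilot-run data set
augmented = np.concatenate([pilot, virtual_samples(pilot, 50, rng=0)])
print(augmented.size)
```

The augmented set can then train a neural network (or any learner) that would otherwise overfit the five original points; since each virtual point is a mean of observed values, it never leaves the observed range.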
Copyright©北京勤云科技发展有限公司  京ICP备09084417号