101.
The rectangular layout problem with equilibrium constraints originates from the layout design of equipment in satellite modules and belongs to the class of combinatorial optimization problems. Deep reinforcement learning uses a reward mechanism and data-driven training to achieve high-performance decision optimization. For this layout optimization problem, a new deep-reinforcement-learning-based algorithm, DAR, and its extension, IDAR, are proposed. DAR uses a pointer network to output the placement order and then applies a positioning mechanism to produce the layout; its time complexity is O(n³). IDAR introduces an iterative mechanism on top of DAR, raising the time complexity to O(n⁴) but producing better results. Tests show that DAR has good learning ability: a model trained on small layout instances can be applied effectively to large ones. In comparative experiments on two large-scale benchmark instances, the proposed algorithms respectively exceed and approach the current best known solutions, offering advantages in both runtime and solution quality.
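To make the "placement order to layout" decoding stage more concrete, the following is a minimal, hypothetical sketch: each rectangle, taken in the given order, is placed at the feasible candidate corner that keeps the combined centroid closest to the container center, a simple balance proxy. The positioning rule, container size, and data structures are illustrative assumptions, not the DAR algorithm itself.

```python
# Hypothetical sketch of a "placement order -> layout" positioning step, in the
# spirit of the decoding stage described above (not the paper's actual mechanism).
# Rectangles are axis-aligned; candidates are corners of already-placed items.

from dataclasses import dataclass

@dataclass
class Rect:
    w: float
    h: float

def overlaps(p, r, placed):
    """Check whether rectangle r at lower-left corner p overlaps any placed item."""
    x, y = p
    for (px, py), pr in placed:
        if x < px + pr.w and px < x + r.w and y < py + pr.h and py < y + r.h:
            return True
    return False

def centroid_offset(placed, container):
    """Distance of the combined centroid from the container center (balance proxy)."""
    cw, ch = container
    area = sum(r.w * r.h for _, r in placed)
    cx = sum((x + r.w / 2) * r.w * r.h for (x, y), r in placed) / area
    cy = sum((y + r.h / 2) * r.w * r.h for (x, y), r in placed) / area
    return ((cx - cw / 2) ** 2 + (cy - ch / 2) ** 2) ** 0.5

def place_in_order(order, rects, container=(100.0, 100.0)):
    """Greedy positioning: for each rectangle (in the given order), try candidate
    corner positions and keep the feasible one that best preserves balance."""
    cw, ch = container
    placed = []
    for idx in order:
        r = rects[idx]
        candidates = [(0.0, 0.0)]
        for (px, py), pr in placed:
            candidates += [(px + pr.w, py), (px, py + pr.h)]
        feasible = [p for p in candidates
                    if p[0] + r.w <= cw and p[1] + r.h <= ch
                    and not overlaps(p, r, placed)]
        if not feasible:
            return None  # this order cannot be completed
        best = min(feasible, key=lambda p: centroid_offset(placed + [(p, r)], container))
        placed.append((best, r))
    return placed

rects = [Rect(30, 20), Rect(20, 20), Rect(40, 10)]
print(place_in_order([0, 1, 2], rects))
```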
102.
The technical water supply system of a hydropower station must be designed and verified in detail according to the station's basic technical parameters and the water supply requirements of its equipment, taking the actual conditions of the site into account, so that the system keeps the units running normally, safely, and stably under all operating conditions. This article describes the optimized design of the technical water supply system for the vertical-shaft impulse (Pelton) turbine-generator units of the Mijiaohe Cascade I (米角河一级) Hydropower Station. A design combining once-through cooling water supply with secondary-circulation cooling water supply is adopted, which avoids the economic losses caused by shutdowns when the supply water quality deteriorates, improves the operational reliability of the water supply system, reduces maintenance work, and creates the conditions necessary for unattended operation with minimal on-site staffing.
103.
Taking the construction of the Saudi Aramco Jizan Emergency Response Center as an example, this article describes the cost-control measures applied during the drawing review stage of the project, covering design optimization, construction techniques, material comparison and selection, and installation methods, so as to provide a reference for the construction of similar projects in the future.
104.
Analytical models used for latency estimation of Networks-on-Chip (NoC) do not provide reliable accuracy, which makes them difficult to use in design space exploration and optimization. In this paper, we propose a learning-based latency prediction model using a deep neural network (DNN). Input features for the DNN model are collected from an analytical model as well as from the Booksim simulator. The DNN model is then adopted in a mapping optimization loop to predict the best mapping for a given combination of application and NoC parameters. Our simulations show that, with the proposed DNN model, the prediction error is less than 12% for both synthetic and application-specific traffic. A speedup of more than 108 times is achieved using DPSO with the DNN model compared to DPSO using the Booksim simulator.
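As an illustration of plugging a latency surrogate into a mapping search, here is a minimal sketch. The `surrogate_latency` function is a toy stand-in for a trained DNN predictor, and the random-key encoding of task-to-core mappings is an assumed illustrative choice rather than the paper's actual DPSO formulation.

```python
# Hypothetical sketch: a surrogate latency model inside a discrete-PSO-style
# mapping loop. The surrogate and encoding below are illustrative stand-ins.

import numpy as np

rng = np.random.default_rng(0)
n_tasks, n_cores = 12, 4

def surrogate_latency(mapping):
    """Stand-in for a trained DNN predictor: a toy cost that penalizes
    putting consecutive (communicating) tasks on different cores."""
    return float(sum(mapping[i] != mapping[i + 1] for i in range(n_tasks - 1)))

def decode(keys):
    """Random-key decoding: sort tasks by key, deal them round-robin onto cores."""
    order = np.argsort(keys)
    mapping = np.empty(n_tasks, dtype=int)
    mapping[order] = np.arange(n_tasks) % n_cores
    return mapping

def dpso(n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    x = rng.random((n_particles, n_tasks))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_f = np.array([surrogate_latency(decode(p)) for p in x])
    g = pbest[np.argmin(pbest_f)].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        f = np.array([surrogate_latency(decode(p)) for p in x])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        g = pbest[np.argmin(pbest_f)].copy()
    return decode(g), pbest_f.min()

print(dpso())
```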
105.
Simulation-based optimization has become an important design tool in microwave engineering. However, using electromagnetic (EM) solvers in the design process is a challenging task, primarily because of the high computational cost of an accurate EM simulation. In this article, we present a review of EM-based design optimization techniques exploiting response-corrected, physically based low-fidelity models. The surrogate models created through such a correction can yield a reasonable approximation of the optimal design of the computationally expensive structure under consideration (the high-fidelity model). Several approaches using this idea are reviewed, including output space mapping, manifold mapping, adaptive response correction, and shape-preserving response prediction. A common feature of these methods is that they are easy to implement and computationally efficient. Application examples are provided. © 2011 Wiley Periodicals, Inc. Int J RF and Microwave CAE, 2012.
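The response-correction idea can be illustrated with a toy sketch: a cheap coarse model is additively corrected so that its value and slope match the expensive fine model at the current design, and the corrected surrogate is optimized in place of the fine model. The 1-D functions below stand in for circuit-level and EM-solver responses; this is a generic first-order correction in the spirit of the reviewed methods, not any specific algorithm from the article.

```python
# Toy response-corrected surrogate loop: correct a cheap coarse model so its value
# and slope agree with the expensive fine model at the current design, then optimize
# the corrected surrogate. The 1-D functions are illustrative stand-ins only.

from scipy.optimize import minimize_scalar

def f_fine(x):      # expensive high-fidelity model (stand-in)
    return (x - 2.0) ** 2 + 0.3 * x

def f_coarse(x):    # cheap low-fidelity model, systematically biased
    return 1.2 * (x - 1.5) ** 2

def fd(f, x, h=1e-5):  # finite-difference slope
    return (f(x + h) - f(x - h)) / (2 * h)

x = 0.0
for it in range(6):
    d0 = f_fine(x) - f_coarse(x)            # zeroth-order (output) correction
    d1 = fd(f_fine, x) - fd(f_coarse, x)    # first-order (slope) correction
    surrogate = lambda t, x=x, d0=d0, d1=d1: f_coarse(t) + d0 + d1 * (t - x)
    x = minimize_scalar(surrogate, bounds=(-5.0, 5.0), method="bounded").x
    print(f"iter {it}: x = {x:.4f}, fine objective = {f_fine(x):.4f}")
```

With the slope term included, the surrogate iterates converge to the fine model's optimum rather than stalling at the coarse model's, which is the point of aligning the low-fidelity response with the high-fidelity one.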
106.
This article presents an optimization technique for the design of substrate-integrated waveguide (SIW) filters using knowledge-embedded space mapping. An effective coarse model is proposed to represent the SIW filter; it can be analyzed in the commercially available software ADS. The embedded knowledge includes not only formulas but also extracted design curves, which help to build the mapping between the coarse and fine models. The effectiveness of the proposed method is demonstrated through a design example of a six-pole SIW filter. © 2012 Wiley Periodicals, Inc. Int J RF and Microwave CAE, 2012.
107.
The kernelized fuzzy c-means algorithm uses kernel methods to improve the clustering performance of the well-known fuzzy c-means algorithm by mapping a given dataset non-linearly into a higher-dimensional space, where the transformed data are more likely to be linearly separable. However, to further improve clustering performance, an optimization method is required to overcome the drawbacks of the traditional algorithms, such as sensitivity to initialization, trapping in local minima, and lack of prior knowledge about the optimal parameters of the kernel functions. In this paper, to overcome these drawbacks, a new clustering method is proposed that combines the kernelized fuzzy c-means algorithm with a recently proposed ant-based optimization algorithm, hybrid ant colony optimization for continuous domains. The proposed method is applied to a dataset obtained from the MIT–BIH arrhythmia database. The dataset consists of six types of ECG beats: Normal Beat (N), Premature Ventricular Contraction (PVC), Fusion of Ventricular and Normal Beat (F), Atrial Premature Beat (A), Right Bundle Branch Block Beat (R), and Fusion of Paced and Normal Beat (f). Four time-domain features are extracted for each beat type, and training and test sets are formed. Several experiments show that the proposed method outperforms the traditional fuzzy c-means and kernelized fuzzy c-means algorithms.
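For reference, a minimal sketch of the kernelized fuzzy c-means updates with a Gaussian kernel is shown below (a standard formulation that keeps prototypes in the input space). The ant-colony-based tuning of initialization and kernel parameters described in the abstract is omitted, and the toy data are invented for illustration.

```python
# Sketch of alternating updates for kernelized fuzzy c-means with a Gaussian kernel.
# A standard formulation; the paper's ACO-based parameter tuning is not shown.

import numpy as np

def gaussian_kernel(X, V, sigma):
    d2 = ((X[:, None, :] - V[None, :, :]) ** 2).sum(-1)    # squared distances (n, c)
    return np.exp(-d2 / (2 * sigma ** 2))

def kfcm(X, c, m=2.0, sigma=1.0, iters=100, tol=1e-6, seed=0):
    rng = np.random.default_rng(seed)
    V = X[rng.choice(len(X), c, replace=False)]             # initial prototypes
    for _ in range(iters):
        K = gaussian_kernel(X, V, sigma)                     # (n, c)
        dist = np.clip(1.0 - K, 1e-12, None)                 # kernel-induced distance
        U = dist ** (-1.0 / (m - 1.0))
        U /= U.sum(axis=1, keepdims=True)                    # memberships (n, c)
        W = (U ** m) * K
        V_new = (W.T @ X) / W.sum(axis=0)[:, None]           # prototype update
        converged = np.abs(V_new - V).max() < tol
        V = V_new
        if converged:
            break
    return U, V

# Toy usage on two Gaussian blobs
X = np.vstack([np.random.default_rng(1).normal(0, 0.3, (50, 2)),
               np.random.default_rng(2).normal(2, 0.3, (50, 2))])
U, V = kfcm(X, c=2)
print(V)
```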
108.
Over the last few decades, many different evolutionary algorithms have been introduced for solving constrained optimization problems. However, due to the variability of problem characteristics, no single algorithm performs consistently over a range of problems. In this paper, instead of introducing another such algorithm, we propose an evolutionary framework that utilizes existing knowledge to make logical changes for better performance. The algorithmic aspects considered are the way search operators are used, the handling of feasibility, parameter setting, and solution refinement. The combined impact of these modifications is significant, as shown by solving two sets of test problems: (i) the 24 test problems used for the CEC2006 constrained optimization competition and (ii) the 36 test instances introduced for the CEC2010 constrained optimization competition. The results demonstrate that the proposed algorithm performs better than the state-of-the-art algorithms.
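One common way of handling feasibility in constrained evolutionary algorithms is pairwise comparison by feasibility rules, sketched below. This is a standard technique shown only as an illustration; it is not claimed to be the exact mechanism used in the proposed framework.

```python
# Feasibility-rule comparison (Deb's rules) for constrained evolutionary search:
# a standard illustration of "dealing with feasibility", not the paper's framework.

def total_violation(g_values, h_values, eps=1e-4):
    """Sum of violations for inequalities g(x) <= 0 and equalities |h(x)| <= eps."""
    v = sum(max(0.0, g) for g in g_values)
    v += sum(max(0.0, abs(h) - eps) for h in h_values)
    return v

def better(a, b):
    """Return True if candidate a is preferred over candidate b.
    Each candidate is a pair (objective_value, violation)."""
    fa, va = a
    fb, vb = b
    if va == 0.0 and vb == 0.0:      # both feasible: lower objective wins
        return fa < fb
    if va == 0.0 or vb == 0.0:       # exactly one feasible: the feasible one wins
        return va == 0.0
    return va < vb                   # both infeasible: smaller violation wins

# Example: a feasible but worse-objective point beats an infeasible better one
a = (5.0, 0.0)
b = (1.0, 0.3)
print(better(a, b))   # True
```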
109.
In the analysis of time-invariant fuzzy time series, fuzzy logic group relationship tables have generally been preferred for determining fuzzy logic relationships, because they avoid complex matrix operations. However, when such tables are used, the membership values of the fuzzy sets are ignored: contrary to fuzzy set theory, only the elements with the highest membership value are considered. This causes information loss and reduces the explanatory power of the model. To deal with these problems, a novel time-invariant fuzzy time series forecasting approach is proposed in this study. In the proposed method, the membership values in the fuzzy relationship matrix are computed using the particle swarm optimization technique; this is the first method in the literature in which a particle swarm optimization algorithm is used to determine fuzzy relations. In addition, to increase forecasting accuracy and make the approach more systematic, the fuzzy c-means clustering method is used for fuzzification of the time series. The proposed method is applied to well-known time series to demonstrate its forecasting performance. These series are also analyzed with several other forecasting methods from the literature, and the results of the proposed method are compared with those produced by the other methods. The proposed method is observed to give the most accurate forecasts.
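A minimal sketch of the forecasting step of a time-invariant fuzzy time series model with an explicit fuzzy relationship matrix is given below, using max-min composition and centroid defuzzification. The matrix entries, fuzzy-set centers, and membership function are invented for illustration; in the proposed method the matrix entries are learned with particle swarm optimization and the fuzzy sets come from fuzzy c-means clustering.

```python
# Forecasting step of a time-invariant fuzzy time series model with an explicit
# fuzzy relationship matrix R. All numeric values here are illustrative only.

import numpy as np

centers = np.array([10.0, 20.0, 30.0])        # centers of the fuzzy sets A1..A3

def fuzzify(x, centers, spread=5.0):
    """Triangular memberships of a crisp value in each fuzzy set."""
    u = np.maximum(0.0, 1.0 - np.abs(x - centers) / spread)
    return u / u.sum() if u.sum() > 0 else np.ones_like(u) / len(u)

R = np.array([[0.8, 0.5, 0.1],                # fuzzy relation: rows = current set,
              [0.3, 0.9, 0.4],                # columns = next set (made-up values)
              [0.1, 0.6, 0.7]])

def forecast(x, centers, R):
    u_t = fuzzify(x, centers)
    u_next = np.max(np.minimum(u_t[:, None], R), axis=0)    # max-min composition
    return float((u_next * centers).sum() / u_next.sum())   # centroid defuzzification

print(forecast(18.0, centers, R))
```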
110.
Nowadays, various imitations of natural processes are used to solve challenging optimization problems faster and more accurately. Spin-glass-based optimization, in particular, has shown strong local search capability and parallelism. However, spin glasses have a low rate of convergence, since they rely on Monte Carlo simulation techniques such as simulated annealing (SA). Here, we propose two algorithms that combine the long-range effect in spin glasses with extremal optimization (EO-SA) and with learning automata (LA-SA). Instead of arbitrarily flipping spins at each step, these two strategies choose the next spin selectively, exploiting the optimization landscape. As shown in this paper, this selection strategy can lead to a faster rate of convergence and improved performance. The two resulting algorithms are then used to solve the portfolio selection problem, which is NP-complete. Comparison of the test results indicates that the two algorithms, while very different in strategy, provide similar performance and reach comparable probability distributions for spin selection. Furthermore, experiments show that there is no difference in speed between LA-SA and EO-SA for glasses with few spins, but EO-SA performs much better than LA-SA for large glasses. This is confirmed by test results on five of the world's major stock markets. Finally, the convergence speed is compared with that of other heuristic methods, such as neural networks (NN), tabu search (TS), and genetic algorithms (GA), to confirm the validity of the proposed methods.
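The "choose the next spin selectively" idea can be sketched with tau-style extremal optimization on a toy Ising energy: spins are ranked by a local fitness and a poorly fit spin is preferentially flipped according to a power law over ranks. The couplings, fitness definition, and parameters below are illustrative assumptions, not the paper's portfolio-selection formulation.

```python
# tau-EO style spin selection on a toy Ising energy. Illustrates selective spin
# choice; the paper's portfolio-selection spin-glass model is not reproduced here.

import numpy as np

rng = np.random.default_rng(0)
n = 40
J = rng.normal(size=(n, n))
J = (J + J.T) / 2                 # symmetric random couplings
np.fill_diagonal(J, 0)
s = rng.choice([-1, 1], size=n)   # initial spin configuration

def local_fitness(s, J):
    """Per-spin fitness: how well each spin agrees with its local field."""
    return s * (J @ s)

def pick_spin(fitness, tau=1.4):
    """tau-EO: rank spins from worst to best and pick rank k with P(k) ~ k^(-tau)."""
    order = np.argsort(fitness)               # worst (lowest fitness) first
    ranks = np.arange(1, len(fitness) + 1)
    p = ranks ** (-tau)
    p /= p.sum()
    return order[rng.choice(len(fitness), p=p)]

def energy(s, J):
    return -0.5 * s @ J @ s

best_e = energy(s, J)
for step in range(2000):
    i = pick_spin(local_fitness(s, J))
    s[i] = -s[i]                              # flip the chosen spin unconditionally
    e = energy(s, J)
    if e < best_e:
        best_e = e
print(best_e)
```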