930 search results found (search time: 15 ms)
91.
Videotex is an interactive information system that provides a variety of services to its users, such as information retrieval, software distribution, transaction processing, and message handling. An important aspect of the quality of service experienced by a videotex user is the response time. We consider the use of mixed individual/broadcast delivery to improve response-time performance. Broadcast delivery is attractive for information retrieval applications, where several users may request the same information page and a single broadcast of that page satisfies all requests simultaneously. Individual responses, however, are required for transaction-oriented services and for the retrieval of confidential information. A queueing model is developed to study the performance of videotex systems under mixed delivery, and analytic results are derived for the mean response time. Numerical examples illustrate the performance characteristics of mixed delivery and show how it can improve response time without increasing the processing capacity of the system.
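The benefit of mixed delivery can be illustrated with a simple queueing back-of-the-envelope calculation. The sketch below is a hypothetical illustration, not the paper's model: it treats the server as an M/M/1 queue and assumes that broadcast-eligible requests are coalesced so that several pending requests for the same page share one transmission. The function names and parameters (`broadcast_fraction`, `coalescing_factor`) are assumptions for this example.

```python
# Hypothetical M/M/1-style comparison of individual-only vs. mixed delivery.
# Broadcasting a popular page satisfies all pending requests at once,
# which lowers the effective load on the server.

def mm1_response_time(arrival_rate, service_rate):
    """Mean response time of an M/M/1 queue (requires utilization < 1)."""
    assert arrival_rate < service_rate, "queue must be stable"
    return 1.0 / (service_rate - arrival_rate)

def mixed_delivery_response_time(arrival_rate, service_rate,
                                 broadcast_fraction, coalescing_factor):
    """Broadcast-eligible requests (a fraction of the total) are coalesced:
    on average `coalescing_factor` requests share one transmission."""
    effective_rate = (arrival_rate * (1 - broadcast_fraction)
                      + arrival_rate * broadcast_fraction / coalescing_factor)
    return mm1_response_time(effective_rate, service_rate)

t_individual = mixed_delivery_response_time(9.0, 10.0, 0.0, 1.0)  # no broadcast
t_mixed = mixed_delivery_response_time(9.0, 10.0, 0.5, 3.0)
print(f"individual-only: {t_individual:.2f}s  mixed: {t_mixed:.2f}s")
```

At high utilization even modest coalescing cuts the mean response time sharply, which is the qualitative effect the paper's analytic results quantify.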
92.
In this paper, we present a software system, OPAS (Optimal Allocation System), that incorporates an optimal allocation policy in the analysis of the time-cost behaviour of parallel computations. OPAS assumes that the underlying system executing the parallel computations has a finite number of processors, that all processors run at the same speed, and that communication takes place through a shared memory. OPAS defines the time cost as a function of the input, the algorithm, the data structure, the processor speed, the number of processors, and the processing-power allocation. To analyse the time cost of a computation, OPAS first applies the optimal allocation policy we developed previously to determine the amount of processing power each node receives, and then derives the computation's time cost. OPAS can evaluate different aspects of time-cost behaviour, such as the minimum, maximum, and average time cost and the time-cost variance. It can also determine speed-up and efficiency, and plot the time-cost curve and time-cost distribution.
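As a toy sketch of the kind of quantities OPAS derives (not OPAS itself, and much simpler than its allocation policy), the following models a computation as levels of independent unit-time tasks on `p` equal-speed processors, then computes the time cost, speed-up, and efficiency. The level-based model and all numbers are assumptions for this illustration.

```python
# Toy time-cost model: a level of w independent unit-time tasks takes
# ceil(w / p) steps on p equal-speed processors.
import math

def time_cost(level_widths, num_processors):
    return sum(math.ceil(w / num_processors) for w in level_widths)

def speedup_and_efficiency(level_widths, num_processors):
    t1 = time_cost(level_widths, 1)            # sequential time cost
    tp = time_cost(level_widths, num_processors)
    speedup = t1 / tp
    return speedup, speedup / num_processors

levels = [1, 4, 8, 4, 1]                        # a diamond-shaped computation
s, e = speedup_and_efficiency(levels, 4)
print(f"speed-up = {s:.2f}, efficiency = {e:.2f}")
```

The narrow first and last levels cap the achievable speed-up, which is why efficiency drops below 1 even with an ideal allocation.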
93.
Studies in support of the assessment of aging structural materials in pressurized water reactors are being performed at the Paul Scherrer Institut. To that end, a state-of-the-art methodology based on a CASMO-4/SIMULATE-3/MCNPX calculation scheme has been developed. As part of the validation of this methodology, we report an investigation of the sensitivity of the calculated results for a specific reactor pressure vessel scraping test to the nuclear data used with the Monte Carlo code. The MCNPX-2.4.0 calculations were carried out using three different data libraries, based on the JEF-2.2, ENDF/B-VI.8, and JENDL-3.3 evaluations, respectively.
94.
Today's digital systems are growing increasingly complex and are being used in increasingly critical functions. The first trend makes them more prone to contain faults; the second makes their failure less tolerable. This widening gap highlights the need for fault-tolerance techniques, which make provisions for the reliable operation of digital systems despite the presence and occasional manifestation of faults. In this paper we present a brief comparative survey of fault tolerance as it arises in hardware systems and software systems. We discuss logical as well as statistical models of fault tolerance, and use these models to analyze design tradeoffs of fault-tolerant systems. This revised version was published online in June 2006 with corrections to the cover date.
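A minimal example of the kind of statistical model such surveys discuss is triple modular redundancy (TMR): the system works when at least two of three replicas work, assuming a perfect majority voter and independent failures. The sketch below illustrates the well-known tradeoff that redundancy only helps when the replicas themselves are reasonably reliable.

```python
# TMR reliability: P(at least 2 of 3 replicas work) with replica
# reliability r and a perfect voter, assuming independent failures.
# R_TMR = 3r^2 - 2r^3; it crosses R = r at r = 0.5.

def tmr_reliability(r):
    return 3 * r**2 - 2 * r**3

print(tmr_reliability(0.9))   # redundancy helps: 0.972 > 0.9
print(tmr_reliability(0.4))   # redundancy hurts: 0.352 < 0.4
```

Below the crossover point r = 0.5, adding redundancy actually lowers system reliability, one of the design tradeoffs such models make explicit.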
97.
A scenario-based reliability analysis approach for component-based software   (cited by: 1; self-citations: 0; cited by others: 1)
This paper introduces a reliability model and a reliability analysis technique for component-based software, named Scenario-Based Reliability Analysis (SBRA). Using scenarios of component interactions, we construct a probabilistic model named the Component-Dependency Graph (CDG). Based on the CDG, a reliability analysis algorithm is developed that expresses the reliability of the system as a function of the reliabilities of its architectural constituents. An extension of the proposed model and algorithm is also developed for distributed software systems. The proposed approach has the following benefits: 1) it analyzes the impact of variations and uncertainties in the reliability of individual components, subsystems, and links between components on the overall reliability estimate of the software system, which is particularly useful when the system is built partially or fully from existing off-the-shelf components; 2) it is suitable for analyzing the reliability of distributed software systems because it incorporates link and delivery-channel reliabilities; 3) it identifies critical components, interfaces, and subsystems, and supports investigating the sensitivity of the application reliability to these elements; 4) it is applicable early in the development lifecycle, at the architecture level. Early detection of the critical architectural elements, those that most affect the overall reliability of the system, is useful for allocating resources in later development phases.
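The scenario-based idea can be sketched in a few lines. This is a simplified stand-in, not the paper's CDG algorithm: each scenario is taken to be a path of component calls with an occurrence probability, the path reliability is the product of the component and link reliabilities along it, and the system estimate is the probability-weighted sum over scenarios. All components, links, and numbers below are invented for illustration.

```python
# Simplified scenario-based reliability estimate over a small
# component-dependency structure (all values are made up).

component_reliability = {"A": 0.99, "B": 0.95, "C": 0.97}
link_reliability = {("A", "B"): 0.999, ("A", "C"): 0.998, ("B", "C"): 0.995}

# Each scenario: (probability of occurrence, sequence of components visited)
scenarios = [
    (0.6, ["A", "B"]),
    (0.4, ["A", "B", "C"]),
]

def path_reliability(path):
    rel = 1.0
    for comp in path:                          # component failures
        rel *= component_reliability[comp]
    for src, dst in zip(path, path[1:]):       # link/delivery failures
        rel *= link_reliability[(src, dst)]
    return rel

system_rel = sum(p * path_reliability(path) for p, path in scenarios)
print(f"estimated system reliability: {system_rel:.4f}")
```

Because each component's reliability enters multiplicatively along every path it appears on, perturbing one component's value and re-running the sum gives exactly the kind of sensitivity analysis the approach advertises.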
98.
Hydrogen is expected to play a significant role in future energy systems. Producing hydrogen efficiently, at minimum cost, and in an environmentally acceptable manner is crucial for the development of a hydrogen-based economy. Exergy analysis is a powerful tool for quantifying sustainable-development potential, an important aspect of which is minimizing irreversibility. The purpose of this study is to perform an exergy analysis of a steam methane reforming (SMR) process for hydrogen production. As a first step, an exergy analysis of an existing process is shown to be an efficient tool for critically examining the process energy use and testing for possible savings in primary energy consumption. The results of this investigation show that the exergetic efficiency of the SMR process is 65.47%, and that the majority of the destroyed exergy is localized in the reformer, which contributes 65.81% of the total exergy destruction. Next, an exergetic parametric study of the SMR was carried out with a factorial design-of-experiments (DOE) method, examining the influence of the reformer operating temperature and pressure and of the steam-to-carbon ratio (S/C) on the process exergetic efficiency. A second-order polynomial model was obtained by correlating the exergetic efficiencies with the reformer operating parameters. The results show that a rational choice of these parameters can improve the exergetic performance of the process.
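To make the second-order polynomial idea concrete, the sketch below fits a quadratic model of exergetic efficiency versus a single factor, the reformer temperature, through three design points and locates the model's stationary point. This is only an illustration of the response-surface step: the paper's model spans temperature, pressure, and S/C ratio, and the data points here are invented.

```python
# Fit eta = a*T^2 + b*T + c exactly through three (T, eta) design points
# using Newton divided differences, then locate the stationary point.
# The three (temperature [K], efficiency [%]) pairs are invented.

def quadratic_through(points):
    (x1, y1), (x2, y2), (x3, y3) = points
    d1 = (y2 - y1) / (x2 - x1)
    d2 = ((y3 - y2) / (x3 - x2) - d1) / (x3 - x1)
    a = d2                                   # expand Newton form to a, b, c
    b = d1 - d2 * (x1 + x2)
    c = y1 - d1 * x1 + d2 * x1 * x2
    return a, b, c

points = [(900.0, 61.0), (1100.0, 65.5), (1300.0, 65.2)]
a, b, c = quadratic_through(points)
T_opt = -b / (2 * a)                          # stationary point of the model
eta_opt = a * T_opt**2 + b * T_opt + c
print(f"model optimum near T = {T_opt:.0f} K, eta = {eta_opt:.2f} %")
```

With more factors, the same logic generalizes to a least-squares fit of a full second-order polynomial over the DOE runs; the fitted surface is then what gets optimized over the operating parameters.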
99.
Graphite was modified by 250 keV ³⁷Cl⁺ ion implantation. Combined Raman microspectrometry and transmission electron microscopy (TEM) studies were used to characterize the multiscale organization of the graphite structure. The penetration depth of ³⁷Cl⁺ into the graphite sample was limited to the surface (∼200 nm) because of the dissipation of the irradiating ion energy, as expected from secondary ion mass spectrometry analysis. Raman microspectrometry appears to be an appropriate tool for probing such length scales. Spectra showed a strong increase of the defect bands after implantation at a fluence of 5 · 10¹³ ions/cm². To examine the structural degradation of the graphite versus depth at the nanometer scale, the focused ion beam technique appears to be a well-suited method for a relevant coupling of Raman and TEM observations.
100.
In this paper, we discuss the circular open dimension problem (CODP), a problem of the cutting/packing family. In the CODP, we are given an initial strip of fixed width W and unlimited length, as well as a finite set N of n circular pieces Cᵢ of known radius rᵢ, i ∈ N. The objective is to find a global optimum corresponding to the minimum length of the initial strip that contains all n pieces. We propose an augmented algorithm for solving the CODP that combines beam search, binary search, and the well-known multi-start strategy. In addition, to increase the efficiency of the algorithm, we incorporate a strategy based on separate beams instead of pooled ones. The performance of the proposed algorithm is evaluated on a set of benchmark instances composed of a group taken from the literature and a group of randomly generated instances. The results show that the proposed algorithm improves several best known solutions from the literature and remains competitive on the newly generated instances.
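One ingredient of such an approach, the binary search over the strip length driven by a feasibility oracle, can be sketched as follows. The oracle here is a deliberately naive stand-in (single-row placement of circles, feasible only when they fit side by side within the width); in the paper the oracle is the beam search over actual placements. The function names and instance are assumptions for this sketch.

```python
# Binary search for the minimum strip length, given a feasibility oracle.
# The naive oracle below only places circles side by side in one row.

def fits_single_row(length, width, radii):
    """Can the circles be placed side by side in a single row?"""
    if 2 * max(radii) > width:       # a circle must fit across the strip
        return False
    return sum(2 * r for r in radii) <= length

def minimal_length(width, radii, tolerance=1e-6):
    lo, hi = 0.0, sum(2 * r for r in radii)   # hi is always feasible here
    while hi - lo > tolerance:
        mid = (lo + hi) / 2
        if fits_single_row(mid, width, radii):
            hi = mid                           # feasible: try shorter strips
        else:
            lo = mid                           # infeasible: need more length
    return hi

print(minimal_length(4.0, [1.0, 1.0, 1.5]))    # about 7.0 for this instance
```

Swapping in a stronger oracle (e.g. a beam search over two-dimensional placements, restarted from multiple seeds) tightens the feasibility test without changing the surrounding binary-search skeleton.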
Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号