2,095 results found (search time: 15 ms); entries 991–1000 are listed below.
991.
In this paper, we consider the communications induced by the execution of a complex application deployed on a heterogeneous large-scale distributed platform. Such applications make intensive use of collective macro-communication schemes such as scatters, personalized all-to-alls, and gather/reduce operations. Rather than minimizing the execution time of a single macro-communication, we focus on steady-state operation: we assume that a large number of macro-communications are performed in pipelined fashion, and we aim to maximize the throughput, i.e., the (rational) number of macro-communications that can be initiated per time-step. We target heterogeneous platforms, modeled by a graph whose resources have different communication and computation speeds. The situation is simpler for series of scatters or personalized all-to-alls than for series of reduce operations, because of the possibility of combining various partial reductions of the local values and of interleaving computations with communications. In all cases, we show how to determine the optimal throughput and how to exhibit a concrete periodic schedule that achieves it.
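The steady-state viewpoint in the abstract above can be sketched with a toy calculation. In a pipelined series of scatters on a tree-shaped platform, every link carries, per scatter, the messages of all destinations routed through it, so the sustainable throughput is bounded by the most loaded link. The platform, bandwidths, and message sizes below are hypothetical, and this bottleneck bound is only the simplest instance of the throughput analysis the paper develops (the general case is solved with linear programming over the platform graph):

```python
# Minimal sketch: steady-state throughput of a pipelined series of scatters
# on a tree, bounded by the most loaded communication link.

def scatter_throughput(edges, messages):
    """edges: {edge: (bandwidth, destinations_routed_through_edge)}
    messages: {destination: message_size}
    Returns the number of scatters that can be initiated per time unit."""
    best = float("inf")
    for bandwidth, dests in edges.values():
        load = sum(messages[d] for d in dests)  # data crossing this link per scatter
        best = min(best, bandwidth / load)
    return best

# Hypothetical 4-node platform rooted at p0; p3 is reached through p2.
edges = {
    ("p0", "p1"): (10.0, {"p1"}),
    ("p0", "p2"): (5.0, {"p2", "p3"}),
    ("p2", "p3"): (2.0, {"p3"}),
}
messages = {"p1": 1.0, "p2": 1.0, "p3": 1.0}
print(scatter_throughput(edges, messages))  # the bottleneck link sets the rate
```

Here the link ("p2", "p3") is the bottleneck: it carries one unit of data per scatter at bandwidth 2, so at most two scatters can be initiated per time unit regardless of how the other links perform.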
992.
Context: In software project management, the distribution of resources to the various project activities is one of the most challenging problems, since it affects team productivity, product quality, and project constraints related to budget and scheduling.
Objective: The study aims to (a) reveal the high complexity of modelling the effort usage proportion in different phases, as well as the divergence from various rules-of-thumb in the related literature, and (b) present a systematic data analysis framework able to offer better interpretation and visualisation of the effort distributed to specific phases.
Method: The basis of the proposed multivariate statistical framework is Compositional Data Analysis, a methodology appropriate for proportions, along with other methods such as deviation from rules-of-thumb, cluster analysis, and analysis of variance. The effort allocations to phases, as reported in around 1500 software projects of the ISBSG R11 repository, were transformed into vectors of proportions of the total effort and analysed with respect to prime project attributes.
Results: The proposed statistical framework detected high dispersion among the data, distribution inequality, and various interesting correlations, trends, groupings, and outliers, especially with respect to other categorical and continuous project attributes. Only a very small number of projects were found to lie close to the rules-of-thumb of the related literature. Significant differences were found in the proportion of effort spent in different phases for different types of projects.
Conclusion: There is no simple model for the effort allocated to the phases of software projects. Data from previous projects can provide valuable information on the distribution of effort for various types of projects, through analysis with multivariate statistical methodologies. The proposed statistical framework is generic and can easily be applied, in a similar sense, to any dataset containing effort allocations to phases.
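The core Compositional Data Analysis step the framework above rests on can be sketched briefly: phase efforts are closed to proportions of the total, then mapped with the centered log-ratio (clr) transform so that ordinary multivariate statistics (clustering, analysis of variance) can be applied. The phase names and effort figures below are hypothetical, not taken from the ISBSG repository:

```python
# Minimal sketch of the compositional pipeline: close raw efforts to
# proportions, then apply the centered log-ratio (clr) transform.
import math

def closure(efforts):
    """Scale raw phase efforts so they sum to 1."""
    total = sum(efforts)
    return [e / total for e in efforts]

def clr(proportions):
    """Centered log-ratio: log of each part minus log of the geometric mean."""
    logs = [math.log(p) for p in proportions]
    g = sum(logs) / len(logs)
    return [l - g for l in logs]

# Hypothetical project: effort in person-hours per phase.
effort = {"specify": 120, "build": 480, "test": 200}
props = closure(list(effort.values()))
z = clr(props)
print(props)   # proportions sum to 1
print(sum(z))  # clr coordinates sum to ~0 by construction
```

The clr coordinates live in ordinary Euclidean space (they sum to zero), which is what makes distance-based methods such as hierarchical clustering meaningful for proportion data.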
993.
The cluster variation method (CVM) was applied to calculate phase equilibria of metal-hydrogen systems. Two subjects are introduced in the present report. The first is a summary of previous studies on the Pd-H system, demonstrating that a single CVM free energy formula can systematically yield information on phase equilibria, intrinsic stability, and short-range-order diffuse intensities. The second is the theoretical calculation of superabundant vacancy (SAV) formation. Within the square approximation of the CVM, it is shown that abundant vacancies are introduced with the absorption of hydrogen when the interaction between vacancy and hydrogen is taken into account. This article was presented at the Multi-Component Alloy Thermodynamics Symposium sponsored by the Alloy Phase Committee of the joint EMPMD/SMD of The Minerals, Metals, and Materials Society (TMS), held in San Antonio, Texas, March 12-16, 2006, to honor the 2006 William Hume-Rothery Award recipient, Professor W. Alan Oates of the University of Salford, UK. The symposium was organized by Y. Austin Chang of the University of Wisconsin, Madison, WI, Patrice Turchi of the Lawrence Livermore National Laboratory, Livermore, CA, and Rainer Schmid-Fetzer of the Technische Universität Clausthal, Clausthal-Zellerfeld, Germany.
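The CVM free energy in the abstract above is beyond a short sketch, but its simplest ancestor, the point (Bragg-Williams) approximation for hydrogen on interstitial sites, illustrates the free-energy-minimization machinery: minimise f(θ) = Eθ + kT[θ ln θ + (1-θ) ln(1-θ)] over the site occupancy θ, whose closed-form minimiser is θ* = 1/(1 + e^(E/kT)). The site energy and temperature below are illustrative numbers, not fitted Pd-H parameters, and the point approximation ignores the short-range correlations that the CVM is designed to capture:

```python
# Minimal sketch: point-approximation (Bragg-Williams) free energy for
# interstitial hydrogen, minimised numerically and checked against the
# closed-form occupancy.
import math

K_B = 8.617e-5  # Boltzmann constant, eV/K

def free_energy(theta, e_site, temp):
    """Mean-field free energy per site: energy plus ideal mixing entropy."""
    kt = K_B * temp
    return (e_site * theta
            + kt * (theta * math.log(theta) + (1 - theta) * math.log(1 - theta)))

def minimise(e_site, temp, steps=20000):
    """Brute-force scan of theta in (0, 1) for the free-energy minimum."""
    best_t, best_f = None, float("inf")
    for i in range(1, steps):
        t = i / steps
        f = free_energy(t, e_site, temp)
        if f < best_f:
            best_t, best_f = t, f
    return best_t

e_site, temp = -0.10, 300.0  # eV, K (illustrative values)
theta_num = minimise(e_site, temp)
theta_exact = 1 / (1 + math.exp(e_site / (K_B * temp)))
print(theta_num, theta_exact)  # the two occupancies agree
```

The CVM improves on this by minimising over cluster (pair, square, ...) probabilities rather than a single point occupancy, which is what lets one free-energy formula also deliver short-range-order diffuse intensities.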
994.
Establishing chemical reaction rate equations based on cluster analysis
Starting from the view that establishing a chemical reaction rate equation amounts to classifying chemical reactions, this paper proposes applying the similarity-coefficient and distance methods of cluster analysis to determine the reaction order and rate constant. The method is highly reliable and easy to program, and it can also be used to determine non-integer reaction orders.
995.
In light of the importance of technology push under market orientation, this paper focuses on new technologies in modern satellite systems and on their application opportunities and development strategies in China.
996.
Design and implementation of high-availability software
罗娟, 曹阳, 郑刚, 张俊新. 《计算机工程》(Computer Engineering), 2004, 30(8): 19-20, 67
Starting from the types and functions of high-availability systems, and building on an analysis of the causes of system failure and of high-availability system architectures, this paper designs a high-availability software architecture that can run in three working modes: active-standby, dual-machine, and multi-machine. Key techniques such as link management and service takeover are implemented concretely via finite-state-machine transition diagrams. Finally, the software was deployed in a real system, where it met the requirements well.
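The finite-state-machine approach to service takeover described above can be sketched briefly. The states and events below are hypothetical, not the paper's exact transition diagram: each node of an active-standby pair runs the same machine, and heartbeat events on the management link drive the transitions:

```python
# Minimal sketch: service takeover in an active-standby pair driven by a
# finite state machine over heartbeat events (hypothetical states/events).
TRANSITIONS = {
    ("standby", "peer_lost"): "active",   # takeover: peer heartbeat timed out
    ("standby", "heartbeat"): "standby",  # peer alive: remain the backup
    ("active",  "fault"):     "failed",   # local fault detected
    ("active",  "heartbeat"): "active",
    ("failed",  "repaired"):  "standby",  # rejoin the pair as the backup
}

class HANode:
    def __init__(self, state="standby"):
        self.state = state

    def on_event(self, event):
        # Unknown (state, event) pairs leave the state unchanged.
        self.state = TRANSITIONS.get((self.state, event), self.state)
        return self.state

node = HANode("standby")
node.on_event("heartbeat")        # peer alive: stay standby
print(node.on_event("peer_lost"))  # heartbeat lost: take over the service
```

Keeping the transition table as data, rather than scattering the logic through the code, is what makes the behaviour easy to review against the paper's state-transition diagram.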
997.
A generic mass flow measurement device was developed as a variation on the theme of counting. In a hypothetical, infinitely sparse mass flow, the number of passing particles could be counted per time frame and multiplied by the mean mass per particle to obtain a mass flow per time unit. In a mass flow of realistic density, however, particles travel in cluster formation and direct counting of individual particles is impossible. If a method were available to reconstruct the original number of particles in a cluster, the mass flow could be computed for realistic clustered flows. This reconstruction algorithm was developed in this research; it uses the measured cluster lengths to reconstruct the total number of particles in each passing cluster. The lengths of the clusters were measured with an optoelectronic device. The reconstruction algorithm was developed using simulation, augmented by clustering theory. For flows of identical-diameter particles, simulation results showed that the number of particles in a cluster could be reconstructed using a very simple reconstruction formula. This formula uses only the total number of clusters per time frame and the total number of individual particles measured in the same time frame. However, identical-diameter flow is not realistic, since even identical particles are measured with a certain error. Reconstruction of the realistic, distributed-diameter particle flow was approximated using the identical-particle method. The optical mass flow sensor has major advantages over traditional methods. It is virtually insensitive to vibrations, contamination, temperature drift, and misalignment, and the underlying measurement concept is well understood. Most importantly, the sensor does not require calibration. The mass flow of identical particles (4.5 mm air gun pellets) was measured with an error smaller than 3%, even for high-density flow rates.
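The abstract does not quote its reconstruction formula, so the following is only a hedged illustration of the counting idea under an extra assumption of my own: if cluster sizes follow a geometric distribution, then the number of clusters C and the number of isolated (singleton) particles S observed in a time frame give a singleton fraction S/C, a mean cluster size of C/S, and hence a reconstructed particle count of roughly C²/S. All numbers below are synthetic, and the actual formula in the paper may differ:

```python
# Hedged illustration: reconstruct a particle count from cluster and
# singleton counts, assuming geometrically distributed cluster sizes.
import random

def simulate_clusters(n_clusters, q, rng):
    """Draw cluster sizes with P(size = k) = (1 - q) * q**(k - 1)."""
    sizes = []
    for _ in range(n_clusters):
        k = 1
        while rng.random() < q:
            k += 1
        sizes.append(k)
    return sizes

rng = random.Random(0)                       # seeded for reproducibility
sizes = simulate_clusters(2000, 0.6, rng)

true_n = sum(sizes)                           # what the sensor cannot count directly
n_clusters = len(sizes)                       # what it can count: clusters ...
singletons = sum(1 for s in sizes if s == 1)  # ... and isolated particles
estimate = n_clusters ** 2 / singletons       # reconstructed particle count
print(true_n, round(estimate))
```

With only two directly observable quantities (clusters and singletons per time frame), the estimate lands within a few percent of the true count in this simulation, which matches the spirit of the "very simple reconstruction formula" the abstract describes.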
998.
We consider two dissimilarity measures between variables that take account of the variances of the variables as well as of their correlations. When variables are standardised, we retrieve widely used dissimilarity measures. The first dissimilarity measure is Euclidean distance and is suitable in studies where negative correlation between variables implies disagreement. The second dissimilarity measure is a Procrustean distance and is suitable in situations where both positive and negative correlations imply agreement. We also discuss aggregation strategies in order to carry out hierarchical clustering and find groups of variables. Applications in consumer and sensory studies are outlined.
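The standardised case mentioned above has a well-known closed form that makes the contrast between the two measures concrete: for standardised variables, squared Euclidean distance reduces to 2(1 − r), so perfect negative correlation is maximally dissimilar, while a reflection-invariant (Procrustes-style) distance reduces to 2(1 − |r|), so any strong correlation counts as agreement. The data below are illustrative, and the general (unstandardised) measures in the paper additionally involve the variances:

```python
# Minimal sketch: the two dissimilarities between standardised variables,
# expressed through the Pearson correlation r.
import math

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    r = sxy / (sx * sy)
    return max(-1.0, min(1.0, r))  # clamp float error away from |r| > 1

def d_euclid(x, y):
    """Euclidean dissimilarity: negative correlation = disagreement."""
    return math.sqrt(2 * (1 - pearson(x, y)))

def d_procrustes(x, y):
    """Reflection-invariant dissimilarity: |r| near 1 = agreement."""
    return math.sqrt(2 * (1 - abs(pearson(x, y))))

x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [5.0, 4.0, 3.0, 2.0, 1.0]  # perfectly negatively correlated with x
print(d_euclid(x, y))      # maximal under the Euclidean measure
print(d_procrustes(x, y))  # zero under the Procrustean measure
```

Feeding either measure into a hierarchical clustering routine then groups variables by agreement under the chosen notion, which is the aggregation step the abstract discusses.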
999.
Nanodispersed gold targets (grain sizes 2–150 nm) were irradiated with 956 MeV Pb⁵⁴⁺ ions (electronic stopping power (dE/dx)ₑ in gold: 87 keV/nm). The ejected gold was gathered on collectors. Desorbed gold nanoclusters were detected by TEM, while the total matter transfer was measured by neutron activation analysis. For all targets, part of the ejected gold consists of nanoclusters in the same size range as the grains of the corresponding target. Desorption of nanoclusters up to 90 nm in size was observed for the first time for atomic primary ions in the electronic stopping regime. The yield of desorbed nanoclusters decreases from 22 to 1.4 clusters/ion as the mean grain size increases from 6 to 30 nm. The total matter transfer measured for the target with 6–10 nm grains is remarkably large: 5 × 10⁵ atoms/ion. The results are discussed.
1000.
俞枫. 《计算机工程》(Computer Engineering), 2006, 32(9): 247-250
To meet the practical needs of Chinese securities firms transforming the business model of their brokerage operations, this paper proposes building a large broker's centralized trading system on a low-cost Intel architecture. It analyses the system's deployment structure, key technologies, and business architecture from the perspectives of reliability, efficiency, scalability, and economy, and shows that a centralized trading system based on the low-cost Intel architecture meets the requirements of the brokerage business transformation.

Copyright©北京勤云科技发展有限公司  京ICP备09084417号