A total of 6,635 query results found (search time: 15 ms).
31.
The separation of PVC from contaminants is one of the most important steps in recycling PVC. Earlier works have shown that one can separate PVC from other polymers by using the X-ray fluorescence technique. However, in many cases, even after careful separation, there is a remaining impurity level of about 0.1% due to the limitations of the separation processes. In many applications, impurities, particularly nonmeltables, cause defects in the PVC matrix and must be removed for best performance and appearance. Melt filtration appears to be the best technique to remove the nonmeltable impurities.
32.
Two refinements of Galerkin's method on finite elements were evaluated for the solution of population balance equations for precipitation systems. The traditional drawbacks of this approach have been the time required for computation of the two-dimensional integrals arising from the aggregation integrals and the difficulty in handling discontinuities that often arise in simulations of seeded reactors. The careful arrangement of invariant integrals for separable aggregation models allows for a thousandfold reduction in the computational costs. Discontinuities that may be present due to the hyperbolic nature of the system may be specifically tracked by the method of characteristics. These discontinuities will arise only from the initial distribution or nucleation and are readily identified. A combination of these techniques can be used that is intermediate in computational cost while still allowing discontinuous number densities. In a case study of calcium carbonate precipitation, it is found that the accuracy improvement gained by tracking the slope discontinuity may not be significant and that the computation speed may be sufficient for dynamic online optimization.
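The cost reduction for separable aggregation models mentioned above can be illustrated with a small sketch (not the paper's actual implementation). Assuming the separable sum kernel β(x, y) = x + y and a hypothetical pivot grid, the O(n²) aggregation "death" sum factorizes into O(n) work via precomputed moments of the number density:

```python
# Sketch assuming the separable sum kernel beta(x, y) = x + y.
# For separable kernels the O(n^2) aggregation "death" sum factorizes
# into O(n) work using precomputed moments of the number density.

def death_rate_direct(x, n):
    # D_j = sum_k beta(x_j, x_k) * n_k, evaluated naively in O(n^2)
    return [sum((xj + xk) * nk for xk, nk in zip(x, n)) for xj in x]

def death_rate_factored(x, n):
    # beta(x, y) = x + y  =>  D_j = x_j * m0 + m1, where the moments
    # m0 = sum_k n_k and m1 = sum_k x_k * n_k are computed once
    m0 = sum(n)
    m1 = sum(xk * nk for xk, nk in zip(x, n))
    return [xj * m0 + m1 for xj in x]

x = [0.5, 1.0, 2.0, 4.0]   # hypothetical pivot sizes
n = [3.0, 2.0, 1.0, 0.5]   # hypothetical number densities
```

Both routines agree term by term; only the factored form avoids the quadratic double sum.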
33.
A melt granulation process has been investigated (1, 2) which efficiently agglomerates pharmaceutical powders for use in both immediate- and sustained-release solid dosage forms. The process utilizes materials that are effective as granulating fluids when they are in the molten state. Cooling of the agglomerated powders and the resultant solidification of the molten materials completes the granulation process. Both the molten agglomeration and cooling solidification were accomplished in a high shear Collette Gral mixer equipped with a jacketed bowl. Hence, the melt granulation process replaces the conventional granulation and drying operations which use water or alcohol solutions. The melt granulation process has been investigated using immediate- and sustained-release TAVIST® (clemastine fumarate USP) tablet formulations. The TAVIST granulations have been characterized by power consumption monitoring, measurement of the granulation particle size distribution, bulk and tapped density determinations, and loss-on-drying measurements. Scale-up of the melt granulation process for the sustained-release TAVIST tablet formulation was judged successful based on a comparison of the hardness, friability, weight uniformity during compression, disintegration time, and dissolution rate data obtained at different manufacturing scales.
34.
35.
Grier, David Alan. Computer, 2007, 40(4): 8-11.
At one time, supercomputing, a term that was synonymous with the Cray-1, was considered to be the best deal in town. However, by the 1990s, manufacturers began to question the need to continue the development of high-speed computers, turning their attention instead to the potential for developing a "high-capacity and high-speed national research and education computer network."
36.
We consider carrier frequency offset estimation in a digital burst-mode satellite transmission affected by phase noise. The corresponding Cramer-Rao lower bound is analyzed for linear modulations under a Wiener phase noise model and under the hypothesis of known transmitted data. Even when resorting to a Monte Carlo average, the evaluation of the Cramer-Rao bound is computationally very hard. We introduce a simple but very accurate approximation that makes this task easy to carry out. As will be shown, the presence of the phase noise produces a remarkable degradation of the frequency estimation accuracy. In addition, we provide asymptotic expressions of the Cramer-Rao bound, from which the effect of the phase noise and the dependence of the frequency offset estimation accuracy on the system parameters clearly emerge. Finally, as a by-product of our derivations and approximations, we derive a couple of estimators specifically tailored for the phase noise channel, which are compared with the classical Rife and Boorstyn algorithm, yielding some important hints on which estimators to use in this scenario.
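The classical Rife and Boorstyn estimator mentioned above begins with a coarse search that maximizes the periodogram of the received samples over a frequency grid. A minimal sketch of that coarse step follows; the signal model (a unit-amplitude tone corrupted by Wiener phase noise, with no additive noise) and all parameter values are illustrative assumptions, not the paper's setup:

```python
import cmath
import math
import random

def rife_boorstyn_coarse(r, freqs):
    # Coarse ML frequency estimate: maximize the periodogram
    # |sum_k r_k * exp(-j*2*pi*f*k)|^2 over a grid of candidate f.
    best_f, best_p = None, -1.0
    for f in freqs:
        s = sum(rk * cmath.exp(-2j * math.pi * f * k) for k, rk in enumerate(r))
        p = abs(s) ** 2
        if p > best_p:
            best_f, best_p = f, p
    return best_f

random.seed(0)
N, f0, sigma_w = 128, 0.1, 0.02       # burst length, true offset (cycles/sample), phase-noise step std
theta, r = 0.0, []
for k in range(N):
    theta += random.gauss(0.0, sigma_w)                 # Wiener phase noise random walk
    r.append(cmath.exp(2j * math.pi * f0 * k + 1j * theta))
freqs = [i / 1024 for i in range(512)]                  # candidate grid in [0, 0.5)
f_hat = rife_boorstyn_coarse(r, freqs)
```

With mild phase noise the periodogram peak stays close to the true offset; the accuracy degradation the abstract describes appears as the phase-noise step grows.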
37.
38.
The computation of covariance and correlation matrices is critical to many data mining applications and processes. Unfortunately the classical covariance and correlation matrices are very sensitive to outliers. Robust methods, such as Quadrant Correlation (QC) and the Maronna method, have been proposed. However, existing algorithms for QC only give acceptable performance when the dimensionality of the matrix is in the hundreds, and the Maronna method is rarely used in practice because of its high computational cost. In this paper we develop parallel algorithms for both QC and the Maronna method. We evaluate these parallel algorithms using a real data set of the gene expression of over 6000 genes, giving rise to a matrix of over 18 million entries. In our experimental evaluation, we explore scalability in dimensionality and in the number of processors, and the trade-offs between accuracy and computational efficiency. We also compare the parallel behaviours of the two methods. From a statistical standpoint, the Maronna method is more robust than QC. From a computational standpoint, while QC requires less computation, interestingly the Maronna method is much more parallelizable than QC. After a thorough experimentation, we conclude that for many data mining applications, both QC and Maronna are viable options. Less robust, but faster, QC is the recommended choice for small parallel platforms. On the other hand, the Maronna method is the recommended choice when a high degree of robustness is required, or when the parallel platform features a large number of processors (e.g., 32).
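The outlier resistance of Quadrant Correlation comes from using only the signs of median-centered data. A minimal single-threaded sketch (not the parallel algorithm of the paper; the data values are hypothetical):

```python
def median(v):
    s = sorted(v)
    m = len(s) // 2
    return s[m] if len(s) % 2 else 0.5 * (s[m - 1] + s[m])

def sign(z):
    return (z > 0) - (z < 0)

def quadrant_correlation(x, y):
    # Fraction of points in "agreeing" quadrants about the medians,
    # minus the fraction in disagreeing quadrants: a sign-based,
    # outlier-resistant correlation estimate.
    mx, my = median(x), median(y)
    return sum(sign(xi - mx) * sign(yi - my)
               for xi, yi in zip(x, y)) / len(x)

x = [1.0, 2.0, 3.0, 4.0, 5.0, 100.0]   # hypothetical data with a gross outlier
y = [1.1, 1.9, 3.2, 4.1, 4.8, -50.0]   # the outlier pair disagrees in sign
qc = quadrant_correlation(x, y)         # remains positive despite the outlier
```

A classical Pearson correlation on the same data would be dragged strongly negative by the single outlier pair, which is the sensitivity the abstract refers to.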
39.
General equilibrium models are usually represented as a system of levels equations (e.g., in North America) or a system of linearized equations (e.g., in Australia). Either representation can be used to obtain accurate solutions. General-purpose software is available in both cases: GAMS or MPS/GE is typically used by levels modellers and GEMPACK by linearizers. Some equations (notably accounting identities) are naturally expressed in levels, while others (especially behavioural equations) are naturally expressed in linearized form. This paper describes the new GEMPACK facility for solving models represented as a mixture of levels and linearized equations and discusses the advantages to modellers of using such a representation.
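The levels-versus-linearized distinction can be made concrete with the accounting identity V = P · Q, whose percentage-change (linearized) form is v = p + q. A small numerical check, with entirely hypothetical base values and shocks:

```python
# Levels identity: V = P * Q.
# Linearized (percentage-change) form: v = p + q, where lowercase
# letters denote proportional changes.  All values are hypothetical.
P, Q = 2.0, 5.0            # base-period price and quantity
p, q = 0.03, 0.02          # 3% and 2% shocks
V_new = (P * (1 + p)) * (Q * (1 + q))
v_exact = V_new / (P * Q) - 1       # true proportional change in V
v_linear = p + q                    # linearized approximation
```

For small shocks the linearized form is accurate to second order (the error here is the cross term p · q = 0.0006), which is why behavioural equations are conveniently written this way while exact accounting identities are naturally kept in levels.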
40.
This paper surveys recent research in deliberative real-time artificial intelligence (AI). Major areas of study have been anytime algorithms, approximate processing, and large system architectures. We describe several systems in each of these areas, focusing both on progress within the field and on the costs, benefits, and interactions among the different problem and algorithm complexity limitations used in the surveyed work.
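An anytime algorithm always holds a usable best-so-far answer whose quality improves with additional computation time, so a real-time deliberator can stop it at any deadline. A minimal sketch (a hypothetical example, not one of the surveyed systems):

```python
import time

def anytime_sqrt(a, budget_s):
    # Anytime Newton iteration for sqrt(a): the current iterate is
    # always a usable answer, and it improves monotonically until the
    # time budget expires.
    x = max(a, 1.0)                      # crude initial answer, valid immediately
    deadline = time.monotonic() + budget_s
    while time.monotonic() < deadline:
        x = 0.5 * (x + a / x)            # one refinement step
    return x

est = anytime_sqrt(2.0, 0.01)            # 10 ms budget (hypothetical deadline)
```

Shrinking the budget degrades accuracy gracefully rather than causing failure, which is the trade-off deliberative real-time systems exploit.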
Copyright © Beijing Qinyun Science and Technology Development Co., Ltd. 京ICP备09084417号