141.
Thermodynamics and phase diagrams of lead-free solder materials
Many of the existing and most promising lead-free solders for electronics contain tin, or tin and indium, as a low-melting base alloy with small additions of silver and/or copper. Layers of nickel or palladium are frequently used as contact materials. This makes the two quaternary systems Ag–Cu–Ni–Sn and Ag–In–Pd–Sn of considerable importance for understanding the processes that occur during soldering and during operation of the soldered devices. The present review gives a brief survey of experimental thermodynamic and phase-diagram research in our laboratory. Thermodynamic data were obtained by calorimetric measurements, whereas phase equilibria were determined by X-ray diffraction, thermal analysis and metallographic methods (optical and electron microscopy). Enthalpies of mixing of liquid alloys are reported for the binary systems Ag–Sn, Cu–Sn, Ni–Sn, In–Sn, Pd–Sn, and Ag–Ni; the ternary systems Ag–Cu–Sn, Cu–Ni–Sn, Ag–Ni–Sn, Ag–Pd–Sn, In–Pd–Sn, and Ag–In–Sn; and the two quaternary systems themselves, i.e. Ag–Cu–Ni–Sn and Ag–In–Pd–Sn. Enthalpies of formation are given for solid intermetallic compounds in the three systems Ag–Sn, Cu–Sn, and Ni–Sn. Phase equilibria are presented for binary Ni–Sn and for ternary Ag–Ni–Sn, Ag–In–Pd and In–Pd–Sn. In addition, enthalpies of mixing of liquid alloys are reported for the two ternary systems Bi–Cu–Sn and Bi–Sn–Zn, which are of interest for Bi–Sn and Sn–Zn solders.
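As general background (not taken from the review itself), the integral molar enthalpy of mixing that such calorimetric studies report for a binary liquid A–B is the mole-fraction-weighted sum of the partial molar enthalpies and is commonly fitted with a Redlich–Kister polynomial; the generic textbook form is sketched below.

```latex
% Integral molar enthalpy of mixing of a binary liquid A-B and its
% standard Redlich-Kister fit (generic form, not the specific
% parameterization used in the review):
\Delta H_{\mathrm{mix}}
  = x_{A}\,\Delta\bar{H}_{A} + x_{B}\,\Delta\bar{H}_{B}
  = x_{A} x_{B} \sum_{k=0}^{n} L_{k}\,(x_{A} - x_{B})^{k}
```

Here $\Delta\bar{H}_{A}$ and $\Delta\bar{H}_{B}$ are the partial molar enthalpies of the components and the $L_{k}$ are fitted interaction parameters; ternary and quaternary data are usually described by adding higher-order interaction terms to the binary contributions.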
142.
The implicit Colebrook–White equation has been widely used to estimate the friction factor for turbulent flow in irrigation pipes. A fast, accurate, and robust resolution of the Colebrook–White equation is, in particular, necessary for intensive scientific computations. In this study, the performance of several artificial intelligence approaches, including gene expression programming (GEP), a variant of genetic programming (GP); the adaptive neuro-fuzzy inference system (ANFIS); and artificial neural networks (ANN), is compared with the M5 model tree, a data-mining technique, and with most of the available explicit approximations, on the basis of root mean squared error (RMSE), mean absolute error (MAE) and the correlation coefficient (R). The results show that the Serghides and Buzzelli approximations, with RMSE (0.00002), MAE (0.00001), and R (0.99999) values, performed best. Among the data-mining and artificial intelligence approaches, GEP, with RMSE (0.00032), MAE (0.00026), and R (0.99953) values, performed best. However, all 20 explicit approximations except Wood, Churchill (full range of turbulence including the laminar regime), and Rau and Kumar estimated the friction factor more accurately than GEP.
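For orientation (this code is not from the paper), the implicit Colebrook–White relation and the explicit Serghides approximation it discusses can be sketched as follows; the simple fixed-point solver, initial guess, tolerance, and function names are illustrative choices.

```python
import math

def colebrook_white(re, rel_rough, tol=1e-12, max_iter=100):
    """Solve the implicit Colebrook-White equation for the Darcy friction
    factor by fixed-point iteration on x = 1/sqrt(f)."""
    x = 7.0  # initial guess for 1/sqrt(f), reasonable for turbulent flow
    for _ in range(max_iter):
        x_new = -2.0 * math.log10(rel_rough / 3.7 + 2.51 * x / re)
        if abs(x_new - x) < tol:
            return 1.0 / x_new ** 2
        x = x_new
    return 1.0 / x ** 2

def serghides(re, rel_rough):
    """Explicit Serghides approximation to the Colebrook-White equation."""
    a = -2.0 * math.log10(rel_rough / 3.7 + 12.0 / re)
    b = -2.0 * math.log10(rel_rough / 3.7 + 2.51 * a / re)
    c = -2.0 * math.log10(rel_rough / 3.7 + 2.51 * b / re)
    return (a - (b - a) ** 2 / (c - 2.0 * b + a)) ** -2

# Example: Re = 1e5, relative roughness 1e-4; both give f close to 0.0185,
# agreeing to several decimal places.
print(colebrook_white(1e5, 1e-4))
print(serghides(1e5, 1e-4))
```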
143.
Concurrency control is the activity of synchronizing operations issued by concurrently executing transactions on a shared database. The aim of this control is to provide an execution that has the same effect as a serial (non-interleaved) one. The optimistic concurrency control technique allows transactions to execute without synchronization, relying on commit-time validation to ensure serializability. The effectiveness of optimistic techniques depends on the conflict rate of transactions. Since different systems have different conflict patterns, and these patterns may also change over time, applying the optimistic scheme to the entire system results in degraded performance. In this paper, a novel algorithm is proposed that dynamically selects the optimistic or pessimistic approach based on the value of the conflict rate. The proposed algorithm uses an adaptive resonance theory (ART)-based neural network to decide whether to grant a lock or to determine the winning transaction. In addition, the parameters of this neural network are optimized by a modified gravitational search algorithm. Moreover, in real operational environments the writeset (WS) and readset (RS) are known before execution for only a fraction of the transaction set, so the proposed algorithm is designed to work with optional knowledge about the WS and RS of transactions. Experimental results show that the proposed hybrid concurrency control algorithm reduces the number of aborts at high transaction rates by more than 35 % compared with the strict two-phase locking algorithm used in many commercial database systems. The improvement is 13 % over a purely pessimistic approach and more than 31 % over a purely optimistic approach.
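As a purely illustrative sketch (the paper's algorithm uses an ART-based neural network tuned by a modified gravitational search algorithm, which is not reproduced here), the core idea of switching between optimistic and pessimistic handling based on an observed conflict rate could look roughly like this; the ConflictMonitor class, the fixed threshold, and all names are hypothetical.

```python
from collections import deque

class ConflictMonitor:
    """Tracks a running conflict rate over recently finished transactions.
    Hypothetical helper: the paper replaces this fixed-threshold rule with an
    ART-based neural network whose parameters are tuned by a modified
    gravitational search algorithm."""
    def __init__(self, window=1000):
        self.outcomes = deque(maxlen=window)   # 1 = conflict/abort, 0 = clean commit

    def record(self, conflicted: bool):
        self.outcomes.append(1 if conflicted else 0)

    def conflict_rate(self) -> float:
        return sum(self.outcomes) / len(self.outcomes) if self.outcomes else 0.0


def choose_scheme(monitor: ConflictMonitor, threshold: float = 0.2) -> str:
    """Use pessimistic (lock-based) handling when conflicts are frequent and
    optimistic (validate-at-commit) handling when they are rare."""
    return "pessimistic" if monitor.conflict_rate() > threshold else "optimistic"
```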
144.
After the establishment of DNA/RNA sequencing as a means of clinical diagnosis, the analysis of the proteome is next in line. In fact, proteome-based diagnostics is bound to be even more informative, since proteins are directly involved in the actual cellular processes that are responsible for disease. However, the structural variation and biochemical differences between proteins, the much wider range of concentrations, their spatial distribution, and the fact that protein activity frequently relies on interactions increase the methodological complexity enormously, particularly if the accuracy and robustness required for clinical utility are to be achieved. Here, we discuss the contribution that protein microarray formats could make towards proteome-based diagnostics.
145.
146.
A new way of implementing two local anomaly detectors in a hyperspectral image is presented in this study. Most local anomaly detectors are implemented on spatial windows of the image, because the local area of the image scene is better suited to a single statistical model than the global data are. These detectors are applied using linear projections. However, such detectors are ill-suited when the hyperspectral data set forms nonlinear manifolds in spectral space. As multivariate data, hyperspectral image data sets can be considered low-dimensional manifolds embedded in the high-dimensional spectral space. In real environments, nonlinear spectral mixing occurs frequently, and these manifolds can therefore be nonlinear. In this case, traditional local anomaly detectors based on linear projections cannot distinguish weak anomalies from the background. In this article, local linear manifold learning concepts are adopted, and the anomaly detection algorithms use spectral-space windows, rather than spatial windows, for the linear projection. Performance is assessed by comparing the proposed detectors with classic spatial local detectors on hyperspectral remote-sensing images. The results demonstrate that the proposed algorithms are promising for improving the detection of weak anomalies and for decreasing false alarms.
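As a point of reference (not the authors' proposed method), a classic spatial-window local anomaly detector of the kind the article compares against is the local RX detector, which scores each pixel by its Mahalanobis distance to the statistics of a surrounding spatial window; a minimal sketch, with illustrative function and parameter names, follows.

```python
import numpy as np

def local_rx(cube, half_win=7, eps=1e-6):
    """Local RX anomaly detector on a hyperspectral cube of shape (rows, cols, bands).
    Each pixel is scored by the Mahalanobis distance between its spectrum and the
    mean/covariance estimated from the surrounding spatial window (simplified:
    no guard band around the test pixel)."""
    rows, cols, bands = cube.shape
    scores = np.zeros((rows, cols))
    for r in range(rows):
        for c in range(cols):
            r0, r1 = max(0, r - half_win), min(rows, r + half_win + 1)
            c0, c1 = max(0, c - half_win), min(cols, c + half_win + 1)
            background = cube[r0:r1, c0:c1].reshape(-1, bands)
            mu = background.mean(axis=0)
            cov = np.cov(background, rowvar=False) + eps * np.eye(bands)
            d = cube[r, c] - mu
            scores[r, c] = d @ np.linalg.solve(cov, d)
    return scores

# Example on synthetic data:
# cube = np.random.rand(60, 60, 50); scores = local_rx(cube)
```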
147.
Power efficiency is one of the main challenges in large-scale distributed systems such as datacenters, Grids, and Clouds. The scheduling of applications in such large-scale distributed systems can be studied by representing each application as a set of precedence-constrained tasks modeled by a Directed Acyclic Graph. In this paper we address the problem of scheduling a set of tasks with precedence constraints on a heterogeneous set of Computing Resources (CRs) with the dual objective of minimizing the overall makespan and reducing the aggregate power consumption of the CRs. Most related work in this area uses the Dynamic Voltage and Frequency Scaling (DVFS) approach to achieve these objectives. However, DVFS requires special hardware support that may not be available on all processors in large-scale distributed systems. In contrast, we propose a novel two-phase solution called PASTA that does not require any special hardware support. In its first phase, it uses a novel algorithm to select a subset of the available CRs for running an application, balancing lower overall power consumption of the CRs against a shorter makespan of the application task schedule. In its second phase, it uses a low-complexity power-aware algorithm to create a schedule for running the application tasks on the selected CRs. We show that the overall time complexity of PASTA is $O(p \cdot v^{2})$, where $p$ is the number of CRs and $v$ is the number of tasks. Using simulation experiments on real-world task graphs, we show that the makespan of the schedules produced by PASTA is approximately 20 % longer than that of the schedules produced by the well-known HEFT algorithm. However, the schedules produced by PASTA consume nearly 60 % less energy than those produced by HEFT. Empirical experiments on a physical test-bed confirm the power efficiency of PASTA in comparison with HEFT as well.
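For intuition only: PASTA's actual CR-selection and scheduling rules are not described in detail in this abstract and are not reproduced here. A generic power-aware list-scheduling sketch over an already-selected set of computing resources might look like the following; the Task and CR attributes, the tie-breaking rule, and all names are assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class Task:
    tid: int
    work: float                                   # abstract computation cost
    preds: list = field(default_factory=list)     # ids of predecessor tasks

@dataclass
class CR:
    cid: int
    speed: float        # relative processing speed
    power: float        # power draw while busy (watts)
    ready_at: float = 0.0

def list_schedule(tasks, crs):
    """Greedy sketch: walk the tasks in a precedence-respecting (topological)
    order and place each task on the CR that finishes it earliest, breaking
    ties in favour of the lower-power CR."""
    finish = {}                      # task id -> finish time
    schedule = []                    # (task id, cr id, start, finish)
    for t in tasks:                  # tasks assumed topologically sorted
        ready = max((finish[p] for p in t.preds), default=0.0)
        best = None
        for cr in crs:
            start = max(ready, cr.ready_at)
            end = start + t.work / cr.speed
            key = (end, cr.power)    # earliest finish first, then lower power
            if best is None or key < best[0]:
                best = (key, cr, start, end)
        _, cr, start, end = best
        cr.ready_at = end
        finish[t.tid] = end
        schedule.append((t.tid, cr.cid, start, end))
    return schedule
```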
148.
Polymerization of propylene was performed using a MgCl2·EtOH·TiCl4·ID·TEA·ED catalyst system in hexane, where the internal donor (ID) was an organic diester, the external donor (ED) was a silane compound, and triethylaluminum (TEA) served as the activator. A new method called the isothermal/nonisothermal method (INM), a combination of isothermal and nonisothermal procedures, was applied to produce spherical polymer particles. The effects of the INM method and the prepolymerization temperature on the final polymer morphology, Mw, and catalyst activity were also investigated. The morphology of the polymers was evaluated through scanning electron microscopy (SEM) images, and GPC results were used for molecular weight (Mw) evaluation. It was found that the polymers had better morphology when they were prepared using the INM method. © 2009 Wiley Periodicals, Inc. J Appl Polym Sci, 2009
149.
Eigenvalue problems arise in many computational science and engineering applications: in structural mechanics, nanoelectronics, and Google's PageRank link analysis, for example. The large size of these eigenvalue problems often requires the development of eigensolvers that scale well on parallel computing platforms. In this paper, we compare the effectiveness and robustness of our eigensolver for the symmetric generalized eigenvalue problem, the trace minimization scheme TraceMIN (developed in the early 1980s), against today's well-known sparse eigensolvers, including the LOBPCG and block Krylov–Schur implementations in Trilinos, ARPACK, and several methods in the PRIMME package such as Jacobi–Davidson. In addition, we demonstrate the parallel scalability of two variants of TraceMIN on multicore nodes as well as on large clusters of such nodes. Our results show that TraceMIN is more robust and has higher parallel scalability than the above-mentioned competing eigensolvers.
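For context (a standard result rather than a detail specific to this paper), the trace-minimization idea underlying TraceMIN can be stated compactly: the sum of the $p$ smallest eigenvalues of the symmetric generalized problem $Ax = \lambda Bx$, with $B$ symmetric positive definite, is the minimum of a constrained trace.

```latex
% Trace-minimization formulation of the symmetric generalized
% eigenvalue problem A x = \lambda B x (B symmetric positive definite):
\min_{\substack{X \in \mathbb{R}^{n \times p} \\ X^{T} B X = I_{p}}}
  \operatorname{trace}\!\left(X^{T} A X\right)
  \;=\; \sum_{i=1}^{p} \lambda_{i},
\qquad \lambda_{1} \le \lambda_{2} \le \dots \le \lambda_{n}.
```

The minimizing $X$ spans the eigenvectors associated with the $p$ smallest eigenvalues; TraceMIN pursues this constrained minimization iteratively, which is what makes it suitable for large sparse problems.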
150.
Three-dimensional shape recovery from one or multiple observations is a challenging problem in computer vision. In this paper, we present a new Focus Measure for the estimation of a depth map using image focus. This depth map can subsequently be used in techniques and algorithms leading to the recovery of the three-dimensional structure of the object, a requirement of a number of high-level vision applications. The proposed Focus Measure has shown robustness in the presence of noise compared with earlier Focus Measures. The new Focus Measure is based on an optical transfer function implemented in the Fourier domain. The results show drastic improvements in depth-map estimation, relative to earlier Focus Measures, in the presence of various types of noise, including Gaussian, shot, and speckle noise. The results of a range of Focus Measures are compared using root mean square error and correlation metrics.
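As an illustrative example (not the authors' exact measure, whose optical-transfer-function details are not given in this abstract), a simple Fourier-domain focus measure scores a local image patch by the energy it carries at high spatial frequencies; sharper, better-focused patches concentrate more energy there. The function name and cutoff are assumptions.

```python
import numpy as np

def fourier_focus_measure(patch, cutoff_ratio=0.1):
    """Score the focus of a grayscale patch as the ratio of high-frequency to
    total spectral energy. A well-focused patch keeps more energy outside the
    low-frequency disc around the DC component."""
    spectrum = np.fft.fftshift(np.fft.fft2(patch))
    power = np.abs(spectrum) ** 2
    rows, cols = patch.shape
    cy, cx = rows // 2, cols // 2
    y, x = np.ogrid[:rows, :cols]
    radius = cutoff_ratio * min(rows, cols)
    low_freq = (y - cy) ** 2 + (x - cx) ** 2 <= radius ** 2
    total = power.sum()
    return power[~low_freq].sum() / total if total > 0 else 0.0

# Shape-from-focus usage: evaluate the measure on each pixel's neighbourhood
# across a stack of images taken at different focus settings, and take, per
# pixel, the focus setting that maximizes the measure as the depth estimate.
```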