101.
Optimum design of large-scale structures by the standard genetic algorithm (GA) imposes a very high computational burden. To reduce the computational cost of the standard GA, two different strategies are used. The first strategy modifies the standard GA itself and is called the virtual sub-population (VSP) method. The second strategy uses artificial neural networks to approximate the structural analysis. In this study, radial basis function (RBF), counter propagation (CP) and generalized regression (GR) neural networks are used. Using neural networks within the framework of VSP creates a robust tool for the optimum design of structures.
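As a hedged illustration of the surrogate idea in the abstract above, the sketch below fits a Gaussian radial basis function (RBF) interpolant to a handful of exact evaluations and then scores a GA-style candidate pool with the cheap surrogate instead of re-running the expensive analysis. The objective function, sample sizes, and RBF width are invented for the example; the paper's structural analysis and VSP machinery are far more involved.

```python
import numpy as np

def rbf_fit(X, y, gamma=1.0):
    # Gaussian RBF interpolation: solve (Phi + ridge) @ w = y.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    Phi = np.exp(-gamma * d2)
    return np.linalg.solve(Phi + 1e-8 * np.eye(len(X)), y)

def rbf_predict(X_train, w, X_new, gamma=1.0):
    d2 = ((X_new[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2) @ w

# Stand-in for an expensive structural analysis: a smooth quadratic objective.
def exact_analysis(x):
    return (x ** 2).sum(-1)

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (40, 2))   # design variables of sampled individuals
y = exact_analysis(X)             # exact evaluations (the costly step)
w = rbf_fit(X, y)

# GA-style candidate pool scored by the cheap surrogate, not by reanalysis.
pool = rng.uniform(-1, 1, (200, 2))
scores = rbf_predict(X, w, pool)
best = pool[np.argmin(scores)]
```

In a full VSP/GA loop the surrogate would be retrained periodically as new exact evaluations accumulate.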
102.
Joon-Hyuk Chang, Saeed Gazor, Sanjit K. Mitra 《Pattern recognition》2007,40(3):1123-1134
Most speech enhancement algorithms are based on the assumption that speech and noise are both Gaussian in the discrete cosine transform (DCT) domain. For further enhancement of noisy speech in the DCT domain, we consider multiple statistical distributions (i.e., Gaussian, Laplacian and Gamma) as a set of candidates to model the noise and speech. We first use the goodness-of-fit (GOF) test to measure how far the assumed model deviates from the actual distribution for each DCT component of noisy speech. Our evaluations illustrate that the best candidate is assigned to each frequency bin depending on the signal-to-noise ratio (SNR) and the power spectral flatness measure (PSFM). In particular, since the PSFM exhibits a strong relation with the best statistical fit, we employ a simple recursive estimation of the PSFM in the model selection. The proposed speech enhancement algorithm employs a soft estimate of the speech absence probability (SAP) separately for each frequency bin according to the selected distribution. Both objective and subjective tests are performed to evaluate the proposed algorithms on a large speech database, for various SNR values and types of background noise. These evaluations show that the proposed soft decision scheme based on multiple statistical modeling and the PSFM provides further speech quality enhancement compared with recent methods.
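The power spectral flatness measure that drives the model selection above can be sketched directly: it is the ratio of the geometric to the arithmetic mean of the power spectrum, close to 1 for noise-like frames and near 0 for tonal ones. This is a minimal illustration; the paper's recursive per-bin estimator is more elaborate.

```python
import numpy as np

def spectral_flatness(x, eps=1e-12):
    """Power spectral flatness: geometric mean / arithmetic mean of the
    power spectrum. Near 1 for flat (noise-like) spectra, near 0 for tonal."""
    p = np.abs(np.fft.rfft(x)) ** 2 + eps
    return np.exp(np.mean(np.log(p))) / np.mean(p)

rng = np.random.default_rng(1)
noise = rng.standard_normal(1024)                        # flat spectrum
tone = np.sin(2 * np.pi * 50 * np.arange(1024) / 1024)   # single spectral peak

flat_noise = spectral_flatness(noise)
flat_tone = spectral_flatness(tone)
```

A recursive (frame-smoothed) estimate of the kind the abstract mentions would simply be an exponential average of this quantity over successive frames.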
103.
Thermodynamics and phase diagrams of lead-free solder materials
H. Ipser, H. Flandorfer, Ch. Luef, C. Schmetterer, U. Saeed 《Journal of Materials Science: Materials in Electronics》2007,18(1-3):3-17
Many of the existing and most promising lead-free solders for electronics contain tin, or tin and indium, as a low-melting base alloy with small additions of silver and/or copper. Layers of nickel or palladium are frequently used as contact materials. This makes the two quaternary systems Ag–Cu–Ni–Sn and Ag–In–Pd–Sn of considerable importance for understanding the processes that occur during soldering and during operation of the soldered devices. The present review gives a brief survey of experimental thermodynamic and phase diagram research in our laboratory. Thermodynamic data were obtained by calorimetric measurements, whereas phase equilibria were determined by X-ray diffraction, thermal analyses and metallographic methods (optical and electron microscopy). Enthalpies of mixing for liquid alloys are reported for the binary systems Ag–Sn, Cu–Sn, Ni–Sn, In–Sn, Pd–Sn, and Ag–Ni; the ternary systems Ag–Cu–Sn, Cu–Ni–Sn, Ag–Ni–Sn, Ag–Pd–Sn, In–Pd–Sn, and Ag–In–Sn; and the two quaternary systems themselves, i.e. Ag–Cu–Ni–Sn and Ag–In–Pd–Sn. Enthalpies of formation are given for solid intermetallic compounds in the three systems Ag–Sn, Cu–Sn, and Ni–Sn. Phase equilibria are presented for binary Ni–Sn and ternary Ag–Ni–Sn, Ag–In–Pd and In–Pd–Sn. In addition, enthalpies of mixing of liquid alloys are also reported for the two ternary systems Bi–Cu–Sn and Bi–Sn–Zn, which are of interest for Bi–Sn and Sn–Zn solders.
104.
Saeed Samadianfard, Mohammad Taghi Sattari, Ozgur Kisi, Honeyeh Kazemi 《Applied Artificial Intelligence》2013,27(8):793-813
The implicit Colebrook–White equation has been widely used to estimate the friction factor for turbulent flow in irrigation pipes. A fast, accurate, and robust resolution of the Colebrook–White equation is particularly necessary for computation-intensive scientific applications. In this study, the performance of several artificial intelligence approaches, including gene expression programming (GEP), a variant of genetic programming (GP); the adaptive neuro-fuzzy inference system (ANFIS); and artificial neural networks (ANN), has been compared with the M5 model tree, a data-mining technique, and with most of the available explicit approximations, on the basis of root mean squared error (RMSE), mean absolute error (MAE) and correlation coefficient (R). Results show that the Serghides and Buzzelli approximations, with RMSE (0.00002), MAE (0.00001), and R (0.99999) values, performed best. Among the data-mining and artificial intelligence approaches, the GEP, with RMSE (0.00032), MAE (0.00026), and R (0.99953) values, performed better. However, all 20 explicit approximations except those of Wood, Churchill (full range of turbulence including the laminar regime), and Rau and Kumar estimated the friction factor more accurately than the GEP.
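The implicit equation and Serghides' explicit approximation mentioned above can be compared directly. The sketch below solves Colebrook–White by fixed-point iteration on 1/√f and checks the result against Serghides' three-step formula; the Reynolds number and relative roughness are arbitrary test values.

```python
import math

def colebrook_iterative(re, rel_rough, tol=1e-12):
    """Solve the implicit Colebrook-White equation
    1/sqrt(f) = -2 log10(rel_rough/3.7 + 2.51/(Re*sqrt(f)))
    by fixed-point iteration on x = 1/sqrt(f)."""
    x = 7.0  # initial guess for 1/sqrt(f)
    for _ in range(100):
        x_new = -2.0 * math.log10(rel_rough / 3.7 + 2.51 * x / re)
        if abs(x_new - x) < tol:
            break
        x = x_new
    return 1.0 / x_new ** 2

def serghides(re, rel_rough):
    """Serghides' explicit approximation to the Colebrook-White equation."""
    a = -2.0 * math.log10(rel_rough / 3.7 + 12.0 / re)
    b = -2.0 * math.log10(rel_rough / 3.7 + 2.51 * a / re)
    c = -2.0 * math.log10(rel_rough / 3.7 + 2.51 * b / re)
    return (a - (b - a) ** 2 / (c - 2.0 * b + a)) ** -2

f_exact = colebrook_iterative(1e5, 1e-4)   # Re = 1e5, eps/D = 1e-4
f_approx = serghides(1e5, 1e-4)
```

The near-exact agreement of the two values is consistent with the very low RMSE reported for the Serghides approximation in the abstract.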
105.
Concurrency control is the activity of synchronizing operations issued by concurrently executing transactions on a shared database. The aim of this control is to provide an execution that has the same effect as a serial (non-interleaved) one. The optimistic concurrency control technique allows transactions to execute without synchronization, relying on commit-time validation to ensure serializability. The effectiveness of optimistic techniques depends on the conflict rate of the transactions. Since different systems have different conflict patterns, and the patterns may also change over time, applying the optimistic scheme to the entire system can degrade performance. In this paper, a novel algorithm is proposed that dynamically selects the optimistic or pessimistic approach based on the observed conflict rate. The proposed algorithm uses an adaptive resonance theory-based neural network to decide whether to grant a lock or to detect the winner transaction. In addition, the parameters of this neural network are optimized by a modified gravitational search algorithm. Moreover, in real operational environments the writeset (WS) and readset (RS) are known before execution for only a fraction of the transactions, so the proposed algorithm is designed to work with optional knowledge of the WS and RS of transactions. Experimental results show that the proposed hybrid concurrency control algorithm yields more than a 35 % reduction in the number of aborts at high transaction rates compared with the strict two-phase locking algorithm used in many commercial database systems. The improvement is 13 % compared with a pure-pessimistic approach and more than 31 % compared with a pure-optimistic approach.
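The core idea of switching between optimistic and pessimistic modes on a measured conflict rate can be sketched in a few lines. This toy version tracks an exponentially smoothed conflict rate and compares it to a threshold; the threshold, smoothing factor, and decision rule are invented for illustration, whereas the paper uses an ART-based neural network tuned by a gravitational search algorithm.

```python
class AdaptiveConcurrencyControl:
    """Toy mode switcher: choose the optimistic path when the recent
    conflict rate is low, the pessimistic (locking) path when it is high."""

    def __init__(self, threshold=0.3, alpha=0.1):
        self.threshold = threshold   # illustrative switch point
        self.alpha = alpha           # exponential smoothing factor
        self.conflict_rate = 0.0

    def record(self, conflicted):
        # Exponentially weighted conflict rate over recent transactions.
        self.conflict_rate = ((1 - self.alpha) * self.conflict_rate
                              + self.alpha * (1.0 if conflicted else 0.0))

    def mode(self):
        return "pessimistic" if self.conflict_rate > self.threshold else "optimistic"

cc = AdaptiveConcurrencyControl()
for _ in range(50):           # a burst of conflicting transactions
    cc.record(True)
high_mode = cc.mode()
for _ in range(200):          # a conflict-free workload drives the rate back down
    cc.record(False)
low_mode = cc.mode()
```

The point of the sketch is only the feedback loop: the system adapts as the workload's conflict pattern drifts, which is what a fixed pure-optimistic or pure-pessimistic scheme cannot do.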
106.
Christian Betzen, Mohamed Saiel Saeed Alhamdani, Smiths Lueong, Christoph Schröder, Axel Stang, Jörg D. Hoheisel 《Proteomics. Clinical applications》2015,9(3-4):342-347
After the establishment of DNA/RNA sequencing as a means of clinical diagnosis, the analysis of the proteome is next in line. As a matter of fact, proteome-based diagnostics is bound to be even more informative, since proteins are directly involved in the actual cellular processes that are responsible for disease. However, the structural variation and the biochemical differences between proteins, the much wider range of concentrations and their spatial distribution, as well as the fact that protein activity frequently relies on interaction, increase the methodological complexity enormously, particularly if the accuracy and robustness required for clinical utility are to be achieved. Here, we discuss the contribution that protein microarray formats could make towards proteome-based diagnostics.
107.
108.
What information may be extracted over an urban area by means of joint analysis of two-dimensional (2D) and three-dimensional (3D) remote sensing data? We exploit aerial, Synthetic Aperture Radar (SAR) and Light Detection and Ranging (LIDAR) data to characterize precisely the Presidio area in San Francisco. We discriminate between different objects in the scene using their 2D and 3D characteristics. The final product of the analysis is a set of raster or vector information layers providing land covers, 3D building shapes and Digital Terrain Models (DTMs) of the Presidio. This paper investigates the relative merits of the collected data in retrieving each of these information layers, and examines how automatic algorithms to extract land cover, DTMs and 3D building shapes could be integrated into a processing chain.
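One of the layers above, the DTM, can be illustrated with a deliberately crude ground-filtering sketch: take the local minimum of the surface-height grid in a sliding window, so buildings narrower than the window footprint are suppressed and the terrain remains. The grid, window size, and building are invented; operational LIDAR ground filtering is far more sophisticated.

```python
import numpy as np

def simple_dtm(dsm, window=5):
    """Crude DTM from a DSM height grid via a sliding local-minimum filter.
    Structures narrower than the window footprint are removed."""
    h, w = dsm.shape
    pad = window // 2
    padded = np.pad(dsm, pad, mode="edge")
    dtm = np.empty_like(dsm)
    for i in range(h):
        for j in range(w):
            dtm[i, j] = padded[i:i + window, j:j + window].min()
    return dtm

terrain = np.zeros((20, 20))           # flat ground at 0 m
dsm = terrain.copy()
dsm[8:11, 8:11] = 12.0                 # a 3x3-pixel building, 12 m tall
dtm = simple_dtm(dsm)
building_height = (dsm - dtm).max()    # normalized height of the building
```

Subtracting the DTM from the DSM yields a normalized height layer, which is one input a processing chain could use to separate buildings from ground cover.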
109.
Power efficiency is one of the main challenges in large-scale distributed systems such as datacenters, Grids, and Clouds. One can study the scheduling of applications in such large-scale distributed systems by representing applications as sets of precedence-constrained tasks and modeling them with Directed Acyclic Graphs. In this paper we address the problem of scheduling a set of tasks with precedence constraints on a heterogeneous set of Computing Resources (CRs) with the dual objective of minimizing the overall makespan and reducing the aggregate power consumption of the CRs. Most related work in this area uses the Dynamic Voltage and Frequency Scaling (DVFS) approach to achieve these objectives. However, DVFS requires special hardware support that may not be available on all processors in large-scale distributed systems. In contrast, we propose a novel two-phase solution called PASTA that does not require any special hardware support. In its first phase, it uses a novel algorithm to select a subset of available CRs for running an application that balances lower overall power consumption of CRs against a shorter makespan of application task schedules. In its second phase, it uses a low-complexity power-aware algorithm that creates a schedule for running application tasks on the selected CRs. We show that the overall time complexity of PASTA is $O(p \cdot v^{2})$, where $p$ is the number of CRs and $v$ is the number of tasks. Using simulative experiments on real-world task graphs, we show that the makespans of schedules produced by PASTA are approximately 20 % longer than those produced by the well-known HEFT algorithm. However, the schedules produced by PASTA consume nearly 60 % less energy than those produced by HEFT. Empirical experiments on a physical test-bed confirm the power efficiency of PASTA in comparison with HEFT.
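The HEFT baseline referred to above prioritizes tasks by their upward rank: a task's cost plus the most expensive communication-inclusive path from it to the exit task. The sketch below computes upward ranks on a tiny invented DAG; the task graph and costs are illustrative, and a full HEFT implementation would additionally assign each ranked task to the processor giving its earliest finish time.

```python
from functools import lru_cache

# Illustrative DAG: succ maps each task to its successors, comp holds mean
# computation costs, comm holds mean communication costs per edge.
succ = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
comp = {"A": 10, "B": 6, "C": 8, "D": 4}
comm = {("A", "B"): 2, ("A", "C"): 3, ("B", "D"): 1, ("C", "D"): 2}

@lru_cache(maxsize=None)
def upward_rank(task):
    """HEFT-style upward rank: own cost plus the costliest path to the exit."""
    return comp[task] + max(
        (comm[(task, s)] + upward_rank(s) for s in succ[task]), default=0)

# Tasks are scheduled in decreasing upward-rank order, so ancestors of a task
# always precede it.
order = sorted(succ, key=upward_rank, reverse=True)
```

A power-aware scheme such as PASTA keeps this kind of list-scheduling structure but first prunes the processor set, trading some makespan for lower aggregate energy.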
110.
Polymerization of propylene was performed using an MgCl2·EtOH·TiCl4·ID·TEA·ED catalyst system in hexane, where the internal donor (ID) was an organic diester, the external donor (ED) was a silane compound, and triethylaluminum (TEA) served as the activator. A new method called the isothermal/nonisothermal method (INM), a combination of isothermal and nonisothermal methods, was applied to produce spherical polymer particles. The effects of the INM method and the prepolymerization temperature on the final polymer morphology, molecular weight (Mw), and catalyst activity were also investigated. The morphology of the polymers was evaluated through scanning electron microscopy (SEM) images. GPC results were used for molecular weight (Mw) evaluation. It was found that the polymers had better morphology when prepared using the INM method. © 2009 Wiley Periodicals, Inc. J Appl Polym Sci, 2009