11.
Phytosterols are separated into three classes: 4-desmethylsterols, 4-monomethylsterols, and 4,4′-dimethylsterols. 4,4′-Dimethylsterols are used to detect vegetable oil adulteration, and some compounds in this class have anti-inflammatory and anticancer properties. Methods such as thin layer chromatography (TLC) and solid phase extraction (SPE) are used to separate the phytosterol classes from one another. In some cases, however, separation of all three classes is not required. Moreover, TLC has drawbacks such as low recovery and long analysis times. An SPE method has previously been used, but it required a high volume of solvents to avoid coelution of the phytosterol classes. In this study, an SPE method (silica, 1 g) was developed to separate and enrich only the 4,4′-dimethylsterols from the unsaponifiables of vegetable oil samples using 25 mL of n-hexane/diethyl ether (95:5, v/v). The method was applied to hazelnut and olive oils, and the results were compared with those of TLC and the previously developed SPE method. Recovery of 4,4′-dimethylsterols was twice as high with the new SPE method as with TLC, and generally similar to that of the previous SPE method. Moreover, the new method has the advantage of using a 3.5 times lower volume of solvent than the previously developed SPE methods. Because it is a single step requiring little solvent, the new method is rapid and simple, and can readily be used to detect olive oil adulteration with hazelnut oil and to analyze and quantify nutritionally active compounds of the 4,4′-dimethylsterol class.
12.
This research investigates five reference evapotranspiration models (one combined model, one temperature-based model, and three radiation-based models) under hyper-arid environmental conditions at the operational field level. The models were evaluated and calibrated using the weekly water balance of alfalfa measured by EnviroSCAN to calculate crop evapotranspiration (ETc). The calibrated models were then evaluated and validated using wheat and potatoes, respectively, on the basis of the weekly water balance. The FAO-56 Penman-Monteith model proved superior in estimating ETc, with a slight underestimation of 2 %. The Hargreaves-Samani (HS) model (temperature-based) underestimated ETc by 20 %, and the Priestley-Taylor (PT) and Makkink (MK) models (radiation-based) performed similarly, underestimating by up to 35 % of the measured ETc. The Turc (TR) model had the lowest performance of all, underestimating by up to 60 % of the measured ETc. Local calibration based on the alfalfa evapotranspiration measurements was used to rectify these underestimations. The surprisingly good performance of the calibrated simple HS model, with a new coefficient of 0.0029, demonstrates its potential to improve irrigation scheduling. The MK and PT models ranked third and fourth, respectively, with minor differences between them; their new coefficients were 1.99 and 0.963, respectively. Notably, the calibrated TR model still performed poorly even after its coefficient was increased from 0.013 to 0.034 to account for the hyper-arid conditions, and it required additional seasonal calibration to improve its performance adequately.
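For reference, the temperature-based HS model has a simple closed form, so local recalibration amounts to replacing one coefficient. The sketch below (Python; the temperature and radiation values are made-up illustrations, not data from the study) shows how swapping the original coefficient 0.0023 for the locally calibrated 0.0029 raises the ETo estimate:

```python
import math

def hargreaves_samani(t_mean, t_max, t_min, ra, c=0.0023):
    """Hargreaves-Samani reference evapotranspiration (mm/day).

    t_mean, t_max, t_min: daily air temperatures (deg C);
    ra: extraterrestrial radiation expressed as mm/day of evaporation;
    c: empirical coefficient (0.0023 in the original formulation).
    """
    return c * ra * (t_mean + 17.8) * math.sqrt(t_max - t_min)

# Illustrative hyper-arid day: mean 30 C, range 22-38 C, Ra = 28 mm/day
original = hargreaves_samani(30.0, 38.0, 22.0, 28.0)              # c = 0.0023
calibrated = hargreaves_samani(30.0, 38.0, 22.0, 28.0, c=0.0029)  # study's value
```

With the same inputs, the calibrated coefficient scales the estimate up by 0.0029/0.0023, which is the direction the study's correction requires, since the uncalibrated HS model underestimated ETc.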
13.
Many water resources optimization problems involve conflicting objectives, for which the main goal is to find a set of optimal solutions on, or near, the Pareto front. In this study, a multi-objective water allocation model was developed to optimize the conjunctive use of surface water and groundwater resources for a sustainable supply of agricultural water. The allocation model follows a simulation-optimization (SO) approach: two surrogate models, an Artificial Neural Network for groundwater level simulation and a Genetic Programming model for TDS concentration prediction, were coupled with NSGA-II. The objective functions were: (1) minimizing water shortage relative to water demand, (2) minimizing the drawdown of the groundwater level, and (3) minimizing changes in groundwater quality. According to the MSE and R2 criteria, the surrogate models predicted groundwater level and TDS concentration favorably in comparison with the values measured at the observation wells. In the Najaf Abad plain case study, the average drawdown was limited to 0.18 m and the average TDS concentration decreased from 1257 mg/L to 1229 mg/L under optimal conditions.
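The Pareto front that NSGA-II approximates is defined by a dominance relation over the three objectives; a minimal sketch of that relation and the nondominated filter it induces (the candidate tuples are illustrative, not results from the study):

```python
def dominates(a, b):
    """True if solution a Pareto-dominates b (all objectives minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def nondominated(solutions):
    """Return the nondominated subset: the approximate Pareto front."""
    return [s for s in solutions
            if not any(dominates(o, s) for o in solutions if o is not s)]

# Each tuple: (water shortage, groundwater drawdown in m, TDS in mg/L)
candidates = [(0.10, 0.18, 1229.0), (0.25, 0.10, 1240.0), (0.30, 0.20, 1250.0)]
front = nondominated(candidates)  # the third candidate is dominated
```

The first two candidates trade shortage against drawdown, so neither dominates the other and both stay on the front; the third is worse than the first in every objective and is discarded.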
14.
In the context of water as an economic good, the use of water yields a value that can be affected by the reliability of supply. On-demand irrigation systems provide valuable water to skilled farmers who have the capacity to maximize the economic value of water. In this study, simultaneous optimization of the layout and pipe sizes of an on-demand irrigation network is considered, taking into account both investment and annual energy costs. The optimization problem is formulated as a search for the upstream head value that minimizes the total (investment plus energy) cost of the system; the investment and annual energy costs are obtained in two separate phases. A Max-Min Ant System (MMAS) algorithm is used to obtain the minimum-cost design, considering the layout and pipe diameters of the network simultaneously, and the Clément methodology is used to determine pipeline flow rates at the peak period of irrigation requirements. The applicability of the proposed method is demonstrated by re-designing a real-world example from the literature.
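The feature that distinguishes MMAS from a plain ant system is the clamping of pheromone trails between fixed bounds, which prevents premature stagnation on one layout. A minimal sketch of one pheromone-update step (arc names, deposit amount, and parameter values are illustrative, not taken from the paper):

```python
def mmas_update(tau, best_arcs, delta, rho=0.98, tau_min=0.01, tau_max=1.0):
    """One Max-Min Ant System pheromone update.

    tau: dict mapping arc -> pheromone level;
    best_arcs: arcs used by the iteration-best (or global-best) solution;
    delta: pheromone deposited on each best arc (typically 1/cost);
    rho: trail persistence (1 - evaporation rate).
    Pheromone is clamped to [tau_min, tau_max] -- the defining MMAS rule.
    """
    new = {}
    for arc, level in tau.items():
        level = rho * level + (delta if arc in best_arcs else 0.0)
        new[arc] = min(tau_max, max(tau_min, level))
    return new

tau = {("A", "B"): 0.5, ("A", "C"): 0.5}
tau = mmas_update(tau, best_arcs={("A", "B")}, delta=0.2)
```

Only the arc on the best solution gains pheromone; all arcs evaporate, and any trail drifting outside the bounds is clipped back.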
15.
For the first time, the Middle to Late Eocene Shahbazan Formation was geochemically investigated as a possible source rock in the Dezful Embayment. Maturity indicators derived from Rock-Eval pyrolysis (Tmax and PI) and gas chromatography (CPI) show that the organic matter, which is dominated by a mixed type II/III kerogen, is thermally mature and has already entered the oil window. A fair to good petroleum-generative potential is suggested by moderate to relatively high values of total organic carbon (TOC) and potential yield (S1+S2). Deposition of the Shahbazan Formation under low-oxygen conditions, indicated by a low pristane/phytane ratio (<1), favored the preservation of organic matter, consistent with the considerable TOC contents, ranging from 1.01 to 1.72 wt%. The relation between pristane/nC17 and phytane/nC18, as well as the terrigenous/aquatic ratio (~1), indicates mixed marine and terrestrially sourced organic matter. Based on these results, the Shahbazan Formation could have acted as a prolific oil and gas source rock.
16.
The peak flow of extraordinarily large floods that occur during a period of systematic record is a controversial problem for flood frequency analysis (FFA) using traditional methods. The present study suggests that such floods be treated as historic flood data even though their historical period is unknown. The extraordinarily large flood peaks were first identified using statistical outlier tests and normal probability plots. FFA was then applied with and without the extraordinarily large floods, using two goodness-of-fit measures, mean absolute relative deviation and mean squared relative deviation, to identify the best-fit probability distributions. Next, the generalized extreme value (GEV), three-parameter lognormal (LN3), log-Pearson type III (LP3), and Wakeby (WAK) distributions were used to incorporate the extraordinarily large floods and adjust them with the other systematic data. Finally, the procedures with and without historical adjustment were compared in terms of goodness-of-fit and flood return-period quantiles. The comparison indicates that historical adjustment was more viable from an operational perspective: the results without adjustment were unreasonable (subject to over- and under-estimation) and produced physically unrealistic estimates incompatible with the study area. The proposed approach substantially improved the probability estimation of rare floods for the efficient design of hydraulic structures, risk analysis, and floodplain management.
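The first step, screening the record for extraordinarily large peaks, can be sketched as a high-outlier test on log-transformed flows. In Bulletin-17B-style tests the threshold constant is read from a table indexed by sample size; the constant and the flow values below are illustrative only:

```python
import math
import statistics

def high_outliers(peaks, k=2.0):
    """Flag extraordinary floods as high outliers in the log domain.

    A peak is flagged when log10(Q) exceeds mean + k * stdev of the
    log-transformed series; k is an illustrative constant here, standing
    in for the sample-size-dependent value a formal test would use.
    """
    logs = [math.log10(q) for q in peaks]
    mu, sd = statistics.mean(logs), statistics.stdev(logs)
    threshold = mu + k * sd
    return [q for q, lq in zip(peaks, logs) if lq > threshold]

# Eight ordinary annual peaks plus one extraordinary flood (m^3/s, made up)
peaks = [120, 135, 110, 150, 140, 125, 130, 145, 900]
flagged = high_outliers(peaks)
```

The log transform keeps the screen consistent with the log-based distributions (LN3, LP3) used later in the analysis.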
17.
We present a novel nonuniform quantization compression technique, histogram quantization, for digital holograms of 3-D real-world objects. We exploit a priori knowledge of the distribution of the values in our data. We compare this technique to another histogram-based approach: a modified version of Max's algorithm adapted in a straightforward manner to complex-valued 2-D signals. We conclude the compression procedure by applying lossless techniques to the quantized data, and we demonstrate improvements over previous results obtained by applying uniform and nonuniform quantization techniques to the hologram data.
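The histogram-driven idea can be sketched for a real-valued 1-D signal as follows; this is a simplification of the paper's method, which operates on complex-valued 2-D hologram data, and the bin and level counts are illustrative:

```python
from collections import Counter

def histogram_quantize(samples, n_levels=4, n_bins=64):
    """Nonuniform scalar quantization guided by the data histogram.

    Quantization levels are placed at the centers of the most populated
    histogram bins, so densely occupied value ranges get representatives
    while sparse ranges do not.
    """
    lo, hi = min(samples), max(samples)
    width = (hi - lo) / n_bins or 1.0  # guard against a constant signal
    counts = Counter(min(int((s - lo) / width), n_bins - 1) for s in samples)
    top = [b for b, _ in counts.most_common(n_levels)]
    levels = sorted(lo + (b + 0.5) * width for b in top)
    # Map each sample to its nearest level
    return [min(levels, key=lambda v: abs(v - s)) for s in samples]

# Two clusters of values -> two levels land near the cluster centers
samples = [0.0, 0.1, -0.1, 0.05, 10.0, 10.1, 9.9]
quantized = histogram_quantize(samples, n_levels=2)
```

In contrast to uniform quantization, which would waste levels on the empty gap between the clusters, the histogram-guided levels track where the data actually lie.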
18.
One of the main challenges for future wireless systems is to enhance the effective data throughput by exploiting the allocated bandwidth as much as possible. Among several approaches at different layers, one of the most important is the class of so-called link adaptation (LA) techniques, which adapt a set of transmission parameters to the channel state in order to improve performance. In this context, this paper analyzes a particular class of LA techniques, adaptive modulation and coding, in which the modulation and coding rate can vary according to the channel behavior. In particular, a novel LA algorithm, the timed window (TW) method, suitable for time-division duplex systems, is proposed. The performance of the TW algorithm is evaluated taking into account realistic user mobility, communication channel behavior, and physical-layer effects. Finally, although the wireless bearer considered in this study is TETRA (TErrestrial Trunked RAdio), the approach is quite general: it can be of interest for other wireless networks and can be optimized for different channel models (e.g. TU50, HT200, etc.). Copyright © 2008 John Wiley & Sons, Ltd.
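The basic adaptive modulation and coding decision, independent of the TW method itself, can be sketched as a threshold lookup: the transmitter picks the highest-rate scheme whose SNR requirement the channel estimate satisfies. The scheme names and thresholds below are illustrative placeholders, not TETRA parameters:

```python
# Illustrative (threshold_dB, scheme) pairs, ordered by increasing rate
MCS_TABLE = [
    (5.0,  "QPSK 1/2"),
    (10.0, "QPSK 3/4"),
    (15.0, "16QAM 1/2"),
    (20.0, "16QAM 3/4"),
    (25.0, "64QAM 3/4"),
]

def select_mcs(snr_db, table=MCS_TABLE):
    """Return the highest-rate scheme usable at the estimated SNR,
    or None if the channel cannot support even the most robust one."""
    chosen = None
    for threshold, mcs in table:
        if snr_db >= threshold:
            chosen = mcs
    return chosen
```

An LA algorithm such as TW then governs *when* and on what evidence the SNR estimate feeding this lookup is updated, which is where the channel dynamics and mobility effects enter.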
19.
Combinatorial optimization problems are often too complex to be solved within reasonable time limits by exact methods, in spite of the theoretical guarantee that such methods will ultimately obtain an optimal solution. Instead, heuristic methods, which offer no convergence guarantee but have greater flexibility to take advantage of special properties of the search space, are commonly the preferred alternative. The standard procedure is to craft a heuristic method to suit the particular characteristics of the problem at hand, exploiting the available structure to the extent possible. Such tailored methods, however, typically have limited usefulness in other problem domains.

An alternative to this problem-specific approach is a more general methodology that recasts a given problem into a common modeling format, permitting solutions to be derived by a common, rather than tailor-made, heuristic method. Because such general-purpose heuristic approaches forego the opportunity to capitalize on domain-specific knowledge, they are characteristically unable to match the effectiveness or efficiency of special-purpose approaches; indeed, they are typically regarded as having little value except for small or simple problems.

This paper reports on recent work that calls this commonly held view into question. We describe how a particular unified modeling framework, coupled with the latest advances in heuristic search methods, makes it possible to solve problems from a wide range of important model classes.

Correspondence to: Gary A. Kochenberger. This research was supported in part by ONR grants N000140010598 and N000140310621.
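The abstract does not name the unified modeling format, but binary quadratic (QUBO-type) models are a common such recasting in this literature. As an illustration of the idea, any problem expressed as minimizing x'Qx over binary x can be attacked by one generic heuristic, here a one-flip local descent (the Q matrix is a toy instance, not from the paper):

```python
def qubo_value(Q, x):
    """Objective of a binary quadratic model: x' Q x with x in {0,1}^n."""
    n = len(x)
    return sum(Q[i][j] * x[i] * x[j] for i in range(n) for j in range(n))

def one_flip_descent(Q, x):
    """Greedy one-flip local search: flip any bit that improves the
    objective, undo any that does not, until no flip helps."""
    best = qubo_value(Q, x)
    improved = True
    while improved:
        improved = False
        for i in range(len(x)):
            x[i] ^= 1
            value = qubo_value(Q, x)
            if value < best:
                best = value
                improved = True
            else:
                x[i] ^= 1  # undo the non-improving flip
    return x, best

# Toy instance: diagonal rewards each bit, off-diagonal penalizes the pair
Q = [[-1, 2],
     [0, -1]]
solution, value = one_flip_descent(Q, [1, 1])
```

The point is that the solver never sees the original problem's domain, only the matrix Q, which is exactly the trade-off the paper examines: generality of the format versus the lost domain-specific structure.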