  Subscription full text: 1197 articles
  Free: 82 articles
  Free (domestic): 1 article
Electrical Engineering: 6 articles
General: 2 articles
Chemical Industry: 271 articles
Metalworking: 28 articles
Machinery and Instrumentation: 31 articles
Building Science: 38 articles
Mining Engineering: 4 articles
Energy and Power: 58 articles
Light Industry: 95 articles
Water Resources Engineering: 10 articles
Petroleum and Natural Gas: 3 articles
Radio and Electronics: 113 articles
General Industrial Technology: 263 articles
Metallurgical Industry: 82 articles
Atomic Energy Technology: 12 articles
Automation Technology: 264 articles
  2024: 1 article
  2023: 21 articles
  2022: 34 articles
  2021: 53 articles
  2020: 43 articles
  2019: 34 articles
  2018: 46 articles
  2017: 47 articles
  2016: 55 articles
  2015: 41 articles
  2014: 71 articles
  2013: 80 articles
  2012: 94 articles
  2011: 123 articles
  2010: 68 articles
  2009: 60 articles
  2008: 67 articles
  2007: 65 articles
  2006: 44 articles
  2005: 35 articles
  2004: 25 articles
  2003: 23 articles
  2002: 15 articles
  2001: 6 articles
  2000: 13 articles
  1999: 17 articles
  1998: 20 articles
  1997: 13 articles
  1996: 13 articles
  1995: 6 articles
  1994: 5 articles
  1993: 1 article
  1992: 4 articles
  1991: 3 articles
  1990: 1 article
  1989: 2 articles
  1988: 4 articles
  1987: 2 articles
  1986: 5 articles
  1983: 2 articles
  1982: 2 articles
  1981: 1 article
  1980: 1 article
  1978: 1 article
  1977: 7 articles
  1976: 2 articles
  1975: 1 article
  1974: 1 article
  1973: 1 article
  1972: 1 article
Sort order: 1280 results in total (search time: 15 ms)
91.
Digital Elevation Models (DEMs) are used to compute the hydro-geomorphological variables required by distributed hydrological models. However, the resolution of the most precise DEMs is too fine to run these models over regional watersheds. DEMs therefore need to be aggregated to coarser resolutions, which affects both the representation of the land surface and the hydrological simulations. In the present paper, six aggregation algorithms (mean, median, mode, nearest neighbour, maximum and minimum) are used to aggregate the Shuttle Radar Topography Mission (SRTM) DEM from 3″ (90 m) to 5′ (10 km) in order to simulate the water balance of the Lake Chad basin (2.5 million km²). Each of these methods is assessed with respect to selected hydro-geomorphological properties that influence Terrestrial Hydrology Model with Biogeochemistry (THMB) simulations, namely the drainage network, the Lake Chad bottom topography and the floodplain extent. The results show that the mean and median methods produce a smoother representation of the topography. This smoothing removes the depressions governing the floodplain dynamics (floodplain area < 5000 km²), but it also eliminates the spikes and wells responsible for deviations in the drainage network. By contrast, the other aggregation methods yield a rougher relief representation that enables the simulation of a larger floodplain area (> 14,000 km² with the maximum or nearest-neighbour methods) but introduces anomalies in the drainage network. An aggregation procedure based on a variographic analysis of the SRTM data is therefore suggested. It consists of a preliminary filtering of the 3″ DEM to smooth spikes and wells, followed by resampling to 5′ with the nearest-neighbour method so as to preserve the representation of depressions. With the resulting DEM, the drainage network, the Lake Chad bathymetric curves and the simulated floodplain hydrology are consistent with observations (3% underestimation of simulated evaporation volumes).
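As an illustration of the aggregation step described above, the sketch below block-aggregates a fine-resolution DEM to a coarser grid with several of the operators mentioned (mean, median, maximum, minimum, nearest neighbour). It is a minimal NumPy example and not the authors' code: the function name `aggregate_dem`, the integer aggregation factor and the random test grid are assumptions made for illustration only.

```python
# Minimal sketch of block-aggregating a fine DEM to a coarser grid, assuming
# the coarse cell size is an integer multiple of the fine one.
import numpy as np

def aggregate_dem(dem, factor, method="mean"):
    """Aggregate a 2-D DEM by an integer factor using the given operator."""
    rows, cols = dem.shape
    # Trim so the grid divides evenly into factor x factor blocks.
    dem = dem[: rows - rows % factor, : cols - cols % factor]
    blocks = dem.reshape(dem.shape[0] // factor, factor,
                         dem.shape[1] // factor, factor)
    if method == "mean":
        return blocks.mean(axis=(1, 3))
    if method == "median":
        return np.median(blocks, axis=(1, 3))
    if method == "max":
        return blocks.max(axis=(1, 3))
    if method == "min":
        return blocks.min(axis=(1, 3))
    if method == "nearest":
        # Nearest neighbour: keep the centre cell of each block.
        return blocks[:, factor // 2, :, factor // 2]
    raise ValueError(f"unknown method: {method}")

# Example: 3" SRTM (~90 m) to 5' (~10 km) corresponds roughly to factor = 100.
coarse = aggregate_dem(np.random.rand(1000, 1000) * 300.0,
                       factor=100, method="median")
```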
92.
A thermodynamic modeling and optimization of the La-Mg system is carried out by means of the CALPHAD method, taking into account the latest experimental results. The liquid, bcc-La, fcc-La, dhcp-La and hcp-Mg solutions are modeled as substitutional solutions (La, Mg) using the Redlich-Kister formalism. The LaMg, LaMg2, La5Mg41, La2Mg17 and LaMg12 phases are treated as stoichiometric compounds, and the non-stoichiometry of LaMg3 is described as (La,Mg)0.25Mg0.75. The results are in good agreement with the experimental data, which were carefully discussed and selected.
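For readers unfamiliar with the Redlich-Kister formalism mentioned above, the molar Gibbs energy of a binary substitutional solution phase φ is conventionally written as below. This is only the generic form; the interaction parameters iLφ are the quantities fitted during the optimization, and their optimized values are not reproduced here.

```latex
% Generic CALPHAD description of a binary (La, Mg) substitutional solution;
% the ^iL coefficients are the Redlich-Kister interaction parameters.
\[
  G_m^{\varphi} = x_{\mathrm{La}}\,{}^{0}G_{\mathrm{La}}^{\varphi}
                + x_{\mathrm{Mg}}\,{}^{0}G_{\mathrm{Mg}}^{\varphi}
                + RT\left(x_{\mathrm{La}}\ln x_{\mathrm{La}}
                        + x_{\mathrm{Mg}}\ln x_{\mathrm{Mg}}\right)
                + x_{\mathrm{La}} x_{\mathrm{Mg}}
                  \sum_{i \ge 0} {}^{i}L^{\varphi}
                  \left(x_{\mathrm{La}} - x_{\mathrm{Mg}}\right)^{i}
\]
```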
93.
Long, slow hemodialysis (3 × 8 hours/week) has been used without significant modification in Tassin, France, for 30 years, with excellent morbidity and mortality rates. A long dialysis session easily provides a high Kt/V(urea) and allows good control of nutrition and correction of anemia with a limited need for erythropoietin (EPO). Control of serum phosphate and potassium is usually achieved with low-dose medication. The good survival achieved with long hemodialysis sessions is essentially due to lower cardiovascular morbidity and mortality than with short dialysis sessions. This, in turn, is mainly explained by good blood pressure (BP) control without the need for antihypertensive medication. Normotension in this setting is due to the gentle but powerful ultrafiltration provided by the long sessions, associated with a low-salt diet and moderate interdialytic weight gains. These allow adequate control of extracellular volume (dry weight) in most patients without significant intradialytic morbidity. Therefore, increasing the length of the dialysis session seems to be the best way of achieving satisfactory long-term clinical results.
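As a rough illustration of why an 8-hour session "easily provides a high Kt/V(urea)", consider a back-of-envelope single-pool estimate; the clearance K, session length t and urea distribution volume V below are assumed example values, not figures from the study.

```latex
% Illustrative single-pool estimate with assumed values:
% K ~ 200 mL/min dialyzer urea clearance, t = 480 min, V ~ 40 L.
\[
  \frac{K\,t}{V} \;\approx\;
  \frac{0.2\ \text{L/min} \times 480\ \text{min}}{40\ \text{L}} \;=\; 2.4
\]
```

A value of this order is comfortably above typical per-session adequacy targets for thrice-weekly dialysis.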
94.
Understanding the attentional behavior of the human visual system when viewing a rendered 3D shape is of great importance for many computer graphics applications. Eye tracking remains the only way to explore this complex cognitive mechanism. Unfortunately, despite the large number of studies dedicated to images and videos, only a few eye-tracking experiments have been conducted with 3D shapes. Thus, the potential factors that may influence human gaze in the specific setting of 3D rendering are still to be understood. In this work, we conduct two eye-tracking experiments involving 3D shapes, with both static and time-varying camera positions. We propose a method for mapping eye fixations (i.e., where humans gaze) onto the 3D shapes in order to produce a publicly available benchmark of 3D meshes with fixation density maps. First, the collected data are used to study the influence of shape, camera position, material and illumination on visual attention. We find that material and lighting have a significant influence on attention, as does the camera path in the case of dynamic scenes. Then, we compare the performance of four representative state-of-the-art mesh saliency models in predicting ground-truth fixations using two different metrics. We show that, even when combined with a center-bias model, 3D saliency algorithms remain poor at predicting human fixations. To explain their weaknesses, we provide a qualitative analysis of the main factors that attract human attention. We finally compare human eye fixations with Schelling points and show that their correlation is weak.
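The sketch below shows one simple way fixations can be turned into a per-vertex fixation density map, in the spirit of the mapping step described above. It assumes the fixations have already been unprojected onto 3D hit points on the mesh and uses a Euclidean Gaussian splat with an assumed bandwidth `sigma`; the paper's exact mapping procedure may differ (e.g., geodesic weighting).

```python
# Minimal sketch: per-vertex fixation density from 3-D fixation hit points.
import numpy as np

def fixation_density_map(vertices, fixation_points, sigma=0.02):
    """Return one density value per mesh vertex from 3-D fixation points."""
    density = np.zeros(len(vertices))
    for p in fixation_points:
        d2 = np.sum((vertices - p) ** 2, axis=1)      # squared distances
        density += np.exp(-d2 / (2.0 * sigma ** 2))   # Gaussian splat
    if density.max() > 0:
        density /= density.max()                      # normalise to [0, 1]
    return density

# vertices: (N, 3) array of mesh vertices; fixation_points: (M, 3) hit points.
```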
95.
Deduction modulo is a way to combine computation and deduction in proofs by applying the inference rules of a deductive system (e.g. natural deduction or sequent calculus) modulo some congruence, which we assume here to be presented by a set of rewrite rules. Using deduction modulo is equivalent to proving in a theory corresponding to the rewrite rules, and it leads to proofs that are often shorter and more readable. However, cuts may no longer be admissible. We define a new system, the unfolding sequent calculus, and prove its equivalence with the sequent calculus modulo, especially with respect to cut-free proofs. It allows us to show that it is even undecidable whether cuts can be eliminated in the sequent calculus modulo a given rewrite system. Then, to recover cut admissibility, we propose a procedure that completes the rewrite system so that the sequent calculus modulo the resulting system admits cuts. This is done by generalizing Knuth–Bendix completion in a non-trivial way, using the framework of abstract canonical systems. These results highlight the entanglement between computation and deduction and the power of abstract completion procedures. They also provide an effective way to obtain systems admitting cuts, thereby extending the applicability of deduction modulo in automated theorem proving.
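As a small illustration (not taken from the paper) of what "applying inference rules modulo a congruence" means: with the single rewrite rule x + 0 → x, the axiom rule of the sequent calculus modulo closes a sequent whose two sides are equal only up to the congruence generated by the rules.

```latex
% Illustrative rewrite system: R = { x + 0 -> x }.
% The axiom rule applies because P(a + 0) and P(a) are congruent modulo R.
\[
  \frac{}{\; P(a+0) \;\vdash\; P(a) \;}
  \quad \text{axiom, since } P(a+0) \equiv_{R} P(a)
\]
```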
96.
We present versatile anycast, which allows a service running on a varying collection of nodes scattered over a wide-area network to present itself to clients as if it were running on a single node. Providing a single logical address enables the client-side software to preserve the traditional service-access model based on single access points. At the same time, the dynamic composition of anycast groups implemented by versatile anycast enables the server-side service infrastructure to evolve and adapt to changing network conditions. We implement versatile anycast using Mobile IPv6 (MIPv6), which decouples the logical addresses of mobile nodes from their physical location. We exploit that decoupling to implement logical service addresses that are not bound to any physical node, and employ standard MIPv6 mechanisms to dynamically map each such address onto an individual service node. Our solution enables a service to transparently hand off clients among the service nodes at the network level while preserving optimal routing between the clients and the service nodes. We demonstrate that the overhead of versatile anycast is very low. In particular, the client-perceived handoff time is shown to be a linear function of the latencies among the client and the service nodes participating in the handoff. Copyright © 2007 John Wiley & Sons, Ltd.
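The last claim can be read as a simple cost model. This is only an illustrative interpretation of "a linear function of the latencies", not the paper's exact expression: if the handoff exchanges a fixed number of MIPv6 messages between the client c, the old service node and the new one, the client-perceived handoff time adds up one-way latencies d(·,·) with small integer coefficients plus a constant processing term.

```latex
% Illustrative linear model only; the coefficients a, b, g and the constant c0
% depend on the exact message sequence and are not taken from the paper.
\[
  T_{\text{handoff}} \;\approx\;
    a\, d(c, s_{\text{old}})
  + b\, d(c, s_{\text{new}})
  + g\, d(s_{\text{old}}, s_{\text{new}})
  + c_0
\]
```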
97.
3D geological models, commonly built to manage natural resources, are strongly affected by uncertainty because most of the subsurface is inaccessible to direct observation. Appropriate ways to visualize uncertainties intuitively are therefore critical for making sound decisions. However, empirical assessments of uncertainty visualization for decision making are currently limited to 2D map data, while most geological entities are either surfaces embedded in 3D space or volumes. This paper first reviews a typical example of decision making under uncertainty in which uncertainty visualization methods can actually make a difference. The issue is illustrated on a real Middle East oil and gas reservoir, where the goal is to find the optimal location of a new appraisal well. In a second step, we propose a user study that goes beyond traditional 2D map data, using 2.5D pressure data for the purposes of well design. Our experiments study the quality of adjacent versus coincident representations of spatial uncertainty, as compared to the presentation of data without uncertainty; the quality of the representations is assessed in terms of decision accuracy. The study was conducted with a group of 123 graduate students specializing in geology.
98.
Coastal water mapping from remote-sensing hyperspectral data suffers from poor retrieval performance when the targeted parameters have little effect on subsurface reflectance, especially due to the ill-posed nature of the inversion problem. For example, depth cannot accurately be retrieved for deep water, where the bottom influence is negligible. Similarly, for very shallow water it is difficult to estimate the water quality because the subsurface reflectance is affected more by the bottom than by optically active water components.

Most methods based on radiative transfer model inversion do not consider the distribution of the targeted parameters within the inversion process, thereby implicitly assuming that any parameter value in the estimation range has the same probability. In order to improve the estimation accuracy for the above limiting cases, we propose to regularize the objective functions of two estimation methods (maximum likelihood, or ML, and hyperspectral optimization process exemplar, or HOPE) by introducing local prior knowledge on the parameters of interest. To do so, loss functions are introduced into the ML and HOPE objective functions in order to reduce the range of parameter estimation. These loss functions can be characterized either by using prior or expert knowledge, or by inferring this knowledge from the data (thus avoiding the use of additional information).

This approach was tested on both simulated and real hyperspectral remote-sensing data. We show that the regularized objective functions are more peaked than their non-regularized counterparts when the parameter of interest has little effect on subsurface reflectance. As a result, the estimation accuracy of the regularized methods is higher for these depth ranges. In particular, when evaluated on real data, the regularized methods were able to estimate depths of up to 20 m, whereas the corresponding non-regularized methods were accurate only up to 13 m on average on the same data.

This approach thus provides a solution for dealing with such difficult estimation conditions. Furthermore, because no specific framework is needed, it can be extended to any estimation method based on iterative optimization.
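A generic form of the regularization described above is sketched below; this is not the exact ML or HOPE objective from the paper. The data-misfit term is augmented with a loss ℓ(θ) that grows when a parameter of interest θ (e.g., depth) leaves its locally plausible range, with a weight λ controlling the trade-off.

```latex
% Generic regularized inversion objective; r_obs and r_model denote observed
% and modeled subsurface reflectance spectra, theta the parameters of interest
% (e.g., depth, water quality), and lambda the regularization weight.
\[
  J_{\text{reg}}(\theta) \;=\;
  \left\lVert r_{\text{obs}} - r_{\text{model}}(\theta) \right\rVert^{2}
  \;+\; \lambda\, \ell(\theta)
\]
```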
99.
This study extended the work of S. Siddiqui, R. F. West, and K. E. Stanovich (1998), who studied the link between general print exposure and syllogistic reasoning. It was hypothesized that exposure to certain text structures that contain well-delineated logical forms, such as popularized scientific texts, would be a better predictor of deductive reasoning skill than general print exposure, which is not sensitive to the quality of an individual's reading activity. Furthermore, it was predicted that the ability to generate explanatory bridging inferences while reading would also be predictive of syllogistic reasoning. Undergraduate students (N = 112) were tested for vocabulary, nonverbal cognitive ability, exposure to general print, exposure to popularized scientific literature, and the ability to comprehend texts distinguished by the number of inferences that must be generated to support comprehension. Hierarchical multiple regression analyses showed that a combined measure of exposure to general and scientific literature was a significant predictor of syllogistic reasoning ability. Additionally, the ability to comprehend high-inference-load texts was related to solving syllogisms that were inconsistent with world knowledge, indicating an overlap between deductive reasoning skill and text comprehension processes.
100.

This paper investigates the effect of shear on the dynamic behavior of a thin microcantilever beam with manufacturing process defects. Unlike the Rayleigh beam model (RBM), the Timoshenko beam model (TBM) takes the effect of shear on the resonance frequency into consideration. This effect becomes significant for thin microcantilever beams with the large slenderness ratios normally encountered in MEMS devices such as sensors. The TBM is presented and analyzed by numerical simulation using the Finite Element Method (FEM) to determine corrective factors that compensate for manufacturing process defects such as under-etching at the clamped end of the microbeam and the non-rectangular cross-section. A semi-analytical approach is proposed for extracting the Young's modulus from 3D FEM simulations with COMSOL Multiphysics. The model was tested on measurements of a thin chromium microcantilever beam with dimensions of 80 × 2 × 0.95 μm³. The final results indicate that correcting for manufacturing process defects is significant: the corrected Young's modulus, about 280.81 GPa, is very close to the experimental results.

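For context, the classical thin-beam (Euler-Bernoulli/Rayleigh-type) relation below is the usual starting point for extracting Young's modulus from the measured fundamental resonance frequency of a rectangular cantilever; the Timoshenko model and the FEM-derived corrective factors discussed above then adjust this baseline for shear, rotary inertia and process defects (those correction factors are not reproduced here).

```latex
% Fundamental bending resonance of an ideal rectangular cantilever
% (length L, width w, thickness t, density rho, Young's modulus E):
\[
  f_1 = \frac{\lambda_1^{2}}{2\pi L^{2}} \sqrt{\frac{E I}{\rho A}},
  \qquad \lambda_1 \approx 1.875,\quad I = \frac{w t^{3}}{12},\quad A = w t,
\]
% which can be inverted to estimate E from a measured f_1:
\[
  E = \left(\frac{2\pi f_1 L^{2}}{\lambda_1^{2}}\right)^{2} \frac{\rho A}{I}.
\]
```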