Digital Elevation Models (DEMs) are used to compute the hydro-geomorphological variables required by distributed hydrological models. However, the resolution of the most precise DEMs is too fine to run these models over regional watersheds. DEMs therefore need to be aggregated to coarser resolutions, affecting both the representation of the land surface and the hydrological simulations. In the present paper, six algorithms (mean, median, mode, nearest neighbour, maximum and minimum) are used to aggregate the Shuttle Radar Topography Mission (SRTM) DEM from 3″ (90 m) to 5′ (10 km) in order to simulate the water balance of the Lake Chad basin (2.5 Mkm2). Each of these methods is assessed with respect to selected hydro-geomorphological properties that influence Terrestrial Hydrology Model with Biogeochemistry (THMB) simulations, namely the drainage network, the Lake Chad bottom topography and the floodplain extent. The results show that the mean and median methods produce a smoother representation of the topography. This smoothing entails removing the depressions that govern the floodplain dynamics (floodplain area < 5000 km2), but it also eliminates the spikes and wells responsible for deviations in the drainage network. By contrast, the other aggregation methods yield a rougher relief representation that enables the simulation of a larger floodplain area (>14,000 km2 with the maximum or nearest neighbour methods) but produces anomalies in the drainage network. An aggregation procedure based on a variographic analysis of the SRTM data is therefore suggested. It consists of preliminary filtering of the 3″ DEM to smooth spikes and wells, followed by resampling to 5′ via the nearest neighbour method so as to preserve the representation of depressions. With the resulting DEM, the drainage network, the Lake Chad bathymetric curves and the simulated floodplain hydrology are consistent with observations (3% underestimation of simulated evaporation volumes).
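The aggregation operators compared above can be pictured as simple block reductions over the fine-resolution grid. The following is an illustrative sketch, not the authors' code: it assumes the DEM is a NumPy array whose dimensions are exact multiples of the aggregation factor, and it omits the mode operator for brevity.

```python
import numpy as np

def aggregate_dem(dem, factor, method="mean"):
    """Aggregate a fine-resolution DEM to a coarser grid.

    dem    : 2-D array whose dimensions are exact multiples of `factor`
    factor : number of fine cells per coarse cell along each axis
    method : one of "mean", "median", "max", "min", "nearest"
    """
    h, w = dem.shape
    # Group the grid into (factor x factor) blocks, one per coarse cell.
    blocks = dem.reshape(h // factor, factor, w // factor, factor)
    if method == "mean":
        return blocks.mean(axis=(1, 3))
    if method == "median":
        return np.median(blocks, axis=(1, 3))
    if method == "max":
        return blocks.max(axis=(1, 3))
    if method == "min":
        return blocks.min(axis=(1, 3))
    if method == "nearest":
        # Take the centre cell of each block.
        return dem[factor // 2::factor, factor // 2::factor]
    raise ValueError(f"unknown method: {method}")
```

The contrast discussed above falls out directly: "mean"/"median" smooth each block, while "max" or "nearest" can preserve (or exaggerate) local extremes such as depressions, spikes and wells.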
A thermodynamic modeling and optimization of the La-Mg system is carried out by means of the CALPHAD method taking into account the latest experimental results. The liquid, bcc-La, fcc-La, dhcp-La and hcp-Mg solutions are modeled as substitutional solutions (La, Mg) using the Redlich-Kister formalism. The LaMg, LaMg2, La5Mg41, La2Mg17 and LaMg12 phases are treated as stoichiometric compounds, and the non-stoichiometry of LaMg3 is described as (La,Mg)0.25Mg0.75. The results are in good agreement with the set of experimental data which were carefully discussed and selected.
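The Redlich-Kister formalism used for the solution phases expands the excess Gibbs energy of a binary (La,Mg) substitutional solution as G_ex = x_La·x_Mg·Σ_k L_k·(x_La − x_Mg)^k. A minimal sketch of this polynomial follows; it is illustrative only, and the interaction parameters L_k passed in the test are placeholders, not the optimized values of this assessment.

```python
def redlich_kister_excess(x_a, L):
    """Excess Gibbs energy (J/mol) of a binary (A,B) substitutional
    solution from the Redlich-Kister polynomial:
        G_ex = xA * xB * sum_k L[k] * (xA - xB)**k
    x_a : mole fraction of component A (xB = 1 - xA)
    L   : list of interaction parameters L0, L1, ...
    """
    x_b = 1.0 - x_a
    return x_a * x_b * sum(Lk * (x_a - x_b) ** k for k, Lk in enumerate(L))
```

The prefactor x_A·x_B guarantees that the excess energy vanishes at the pure-element endpoints, and odd-order terms L1, L3, ... introduce the asymmetry about x = 0.5 that a single regular-solution parameter cannot capture.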
Long, slow hemodialysis (3 × 8 hours/week) has been used without significant modification in Tassin, France, for 30 years with excellent morbidity and mortality rates. A long dialysis session easily provides high Kt/Vurea and allows for good control of nutrition and correction of anemia with a limited need for erythropoietin (EPO). Control of serum phosphate and potassium is usually achieved with low-dose medication. The good survival achieved by long hemodialysis sessions is essentially due to lower cardiovascular morbidity and mortality than in short dialysis sessions. This, in turn, is mainly explained by good blood pressure (BP) control without the need for antihypertensive medication. Normotension in this setting is due to the gentle but powerful ultrafiltration provided by the long sessions, associated with a low salt diet and moderate interdialytic weight gains. These allow for adequate control of extracellular volume (dry weight) in most patients without important intradialytic morbidity. Therefore, increasing the length of the dialysis session seems to be the best way of achieving satisfactory long-term clinical results.
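As a point of reference for the Kt/Vurea figures mentioned above, single-pool Kt/V is commonly estimated from pre- and post-dialysis urea with the second-generation Daugirdas formula. The sketch below is illustrative and not part of the paper; the variable names are assumptions.

```python
import math

def sp_ktv(pre_urea, post_urea, hours, uf_litres, post_weight_kg):
    """Single-pool Kt/V via the second-generation Daugirdas formula:
        Kt/V = -ln(R - 0.008*t) + (4 - 3.5*R) * UF/W
    where R is the post/pre urea ratio, t the session length in hours,
    UF the ultrafiltration volume (L) and W the post-dialysis weight (kg).
    """
    r = post_urea / pre_urea
    return -math.log(r - 0.008 * hours) + (4 - 3.5 * r) * uf_litres / post_weight_kg
```

The logarithmic term shows why a long session "easily provides high Kt/Vurea": extending t both removes more urea (lower R) and adds the small 0.008·t correction for urea generated during the treatment.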
Understanding the attentional behavior of the human visual system when visualizing a rendered 3D shape is of great importance for many computer graphics applications. Eye tracking remains the only solution to explore this complex cognitive mechanism. Unfortunately, despite the large number of studies dedicated to images and videos, only a few eye tracking experiments have been conducted using 3D shapes. Thus, potential factors that may influence the human gaze in the specific setting of 3D rendering are still to be understood. In this work, we conduct two eye-tracking experiments involving 3D shapes, with both static and time-varying camera positions. We propose a method for mapping eye fixations (i.e., where humans gaze) onto the 3D shapes with the aim of producing a benchmark of 3D meshes with fixation density maps, which is publicly available. First, the collected data is used to study the influence of shape, camera position, material and illumination on visual attention. We find that material and lighting have a significant influence on attention, as does the camera path in the case of dynamic scenes. Then, we compare the performance of four representative state-of-the-art mesh saliency models in predicting ground-truth fixations using two different metrics. We show that, even combined with a center-bias model, the performance of 3D saliency algorithms remains poor at predicting human fixations. To explain their weaknesses, we provide a qualitative analysis of the main factors that attract human attention. We finally provide a comparison of human-eye fixations and Schelling points and show that their correlation is weak.
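One common way to score a saliency model against a ground-truth fixation density map, comparable in spirit to the benchmark evaluation described above (though the paper's exact metrics may differ), is the linear correlation coefficient between the two per-vertex maps:

```python
import numpy as np

def saliency_correlation(predicted, fixation_density):
    """Pearson linear correlation between a per-vertex saliency
    prediction and a ground-truth fixation density map.
    Returns a value in [-1, 1]; 1 means perfect agreement."""
    p = np.asarray(predicted, dtype=float).ravel()
    g = np.asarray(fixation_density, dtype=float).ravel()
    # Standardize both maps, then correlate.
    p = (p - p.mean()) / p.std()
    g = (g - g.mean()) / g.std()
    return float(np.mean(p * g))
```

Because the score is invariant to affine rescaling of either map, it isolates whether the model ranks regions consistently with human gaze, independently of the maps' absolute magnitudes.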
Deduction modulo is a way to combine computation and deduction in proofs, by applying the inference rules of a deductive system (e.g. natural deduction or sequent calculus) modulo some congruence that we assume here to be presented by a set of rewrite rules. Using deduction modulo is equivalent to proving in a theory corresponding to the rewrite rules, and leads to proofs that are often shorter and more readable. However, cuts may no longer be admissible. We define a new system, the unfolding sequent calculus, and prove its equivalence with the sequent calculus modulo, especially w.r.t. cut-free proofs. This makes it possible to show that it is even undecidable whether cuts can be eliminated in the sequent calculus modulo a given rewrite system. Then, to recover cut admissibility, we propose a procedure to complete the rewrite system such that the sequent calculus modulo the resulting system admits cuts. This is done by generalizing Knuth–Bendix completion in a non-trivial way, using the framework of abstract canonical systems. These results highlight the entanglement between computation and deduction, and the power of abstract completion procedures. They also provide an effective way to obtain systems admitting cuts, thereby extending the applicability of deduction modulo in automated theorem proving.
3D geological models commonly built to manage natural resources are much affected by uncertainty because most of the subsurface is inaccessible to direct observation. Appropriate ways to intuitively visualize uncertainties are therefore critical for making sound decisions. However, empirical assessments of uncertainty visualization for decision making are currently limited to 2D map data, while most geological entities are either surfaces embedded in 3D space or volumes. This paper first reviews a typical example of decision making under uncertainty, where uncertainty visualization methods can actually make a difference. The issue is illustrated on a real Middle East oil and gas reservoir, looking for the optimal location of a new appraisal well. In a second step, we propose a user study that goes beyond traditional 2D map data, using 2.5D pressure data for the purposes of well design. Our experiments study the quality of adjacent versus coincident representations of spatial uncertainty as compared to the presentation of data without uncertainty; the representations' quality is assessed in terms of decision accuracy. Our study was conducted with a group of 123 graduate students specialized in geology.
Coastal water mapping from remote-sensing hyperspectral data suffers from poor retrieval performance when the targeted parameters have little effect on subsurface reflectance, especially due to the ill-posed nature of the inversion problem. For example, depth cannot be accurately retrieved for deep water, where the bottom influence is negligible. Similarly, for very shallow water it is difficult to estimate water quality because the subsurface reflectance is affected more by the bottom than by optically active water components. Most methods based on radiative transfer model inversion do not consider the distribution of targeted parameters within the inversion process, thereby implicitly assuming that any parameter value in the estimation range has the same probability. In order to improve the estimation accuracy for the above limiting cases, we propose to regularize the objective functions of two estimation methods (maximum likelihood, or ML, and hyperspectral optimization process exemplar, or HOPE) by introducing local prior knowledge on the parameters of interest. To do so, loss functions are introduced into the ML and HOPE objective functions in order to reduce the range of parameter estimation. These loss functions can be characterized either by using prior or expert knowledge, or by inferring this knowledge from the data (thus avoiding the use of additional information). This approach was tested on both simulated and real hyperspectral remote-sensing data. We show that the regularized objective functions are more peaked than their non-regularized counterparts when the parameter of interest has little effect on subsurface reflectance. As a result, the estimation accuracy of regularized methods is higher for these depth ranges.
In particular, when evaluated on real data, these methods were able to estimate depths up to 20 m, while the corresponding non-regularized methods were accurate only up to 13 m on average for the same data. This approach thus provides a solution for such difficult estimation conditions. Furthermore, because no specific framework is needed, it can be extended to any estimation method based on iterative optimization.
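The regularization idea described above, adding a loss term that encodes local prior knowledge to an inversion objective, can be sketched generically as follows. A quadratic prior is used here purely for illustration; the actual ML and HOPE objectives and the paper's loss functions differ, and all names below are assumptions.

```python
import numpy as np

def regularized_objective(params, forward_model, observed,
                          prior_mean, prior_weight):
    """Generic regularized inversion objective:
    data misfit plus a quadratic loss pulling the estimate toward
    local prior knowledge of the parameters.

    params        : candidate parameter vector (e.g. depth, water quality)
    forward_model : callable mapping params -> modeled reflectance
    observed      : measured subsurface reflectance
    prior_mean    : locally expected parameter values
    prior_weight  : strength of the prior (0 recovers the plain misfit)
    """
    residual = forward_model(params) - observed
    misfit = np.sum(residual ** 2)
    prior_loss = prior_weight * np.sum((params - prior_mean) ** 2)
    return misfit + prior_loss
```

When the forward model is insensitive to a parameter (the flat-misfit regime discussed above), the prior term dominates and restores a well-defined minimum, which is the mechanism behind the "more peaked" objective functions.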
This study extended the work of S. Siddiqui, R. F. West, and K. E. Stanovich (1998), who studied the link between general print exposure and syllogistic reasoning. It was hypothesized that exposure to certain text structures that contain well-delineated logical forms, such as popularized scientific texts, would be a better predictor of deductive reasoning skill than general print exposure, which is not sensitive to the quality of an individual's reading activity. Furthermore, it was predicted that the ability to generate explanatory bridging inferences while reading would also be predictive of syllogistic reasoning. Undergraduate students (N = 112) were tested for vocabulary, nonverbal cognitive ability, exposure to general print, exposure to popularized scientific literature, and the ability to comprehend texts distinguished by the number of inferences that must be generated to support comprehension. Hierarchical multiple regression analyses showed that a combined measure of exposure to general and scientific literature was a significant predictor of syllogistic reasoning ability. Additionally, the ability to comprehend high-inference-load texts was related to solving syllogisms that were inconsistent with world knowledge, indicating an overlap in deductive reasoning skill and text comprehension processes.
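The hierarchical regression logic used above, testing how much variance a predictor block explains after baseline covariates have been entered, amounts to an R² change computation. The sketch below is illustrative, not the authors' analysis code, and omits the significance test on the change.

```python
import numpy as np

def r_squared(X, y):
    """Coefficient of determination for an OLS fit with intercept."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1.0 - resid.var() / y.var()

def hierarchical_r2_change(X_base, X_added, y):
    """R^2 gained when a predictor block (e.g. print exposure) is
    entered after baseline covariates (e.g. vocabulary, nonverbal
    ability) -- the quantity examined in a hierarchical regression."""
    r2_base = r_squared(X_base, y)
    r2_full = r_squared(np.column_stack([X_base, X_added]), y)
    return r2_full - r2_base
```

A positive, significant change indicates the added block predicts the outcome over and above the covariates, which is the form of evidence reported for the combined print-exposure measure.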
This paper investigates the shear effect on the dynamic behavior of a thin microcantilever beam with manufacturing process defects. Unlike the Rayleigh beam model (RBM), the Timoshenko beam model (TBM) takes the shear effect on the resonance frequency into account. This effect becomes significant for thin microcantilever beams with larger slenderness ratios, which are commonly encountered in MEMS devices such as sensors. The TBM is presented and analyzed by numerical simulation using the Finite Element Method (FEM) to determine corrective factors accounting for manufacturing process defects such as underetching at the clamped end of the microbeam and the non-rectangular cross-section. A semi-analytical approach is proposed for extracting Young's modulus from 3D FEM simulations with COMSOL Multiphysics software. The model was tested on measurements of a thin chromium microcantilever beam of dimensions 80 × 2 × 0.95 μm3. Final results indicate that correcting for the effect of manufacturing process defects is significant: the corrected Young's modulus, about 280.81 GPa, is very close to the experimental results.
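For orientation, the classical Euler-Bernoulli relation, which neglects the shear effect that the TBM corrects for, lets one back out Young's modulus from the first flexural resonance of a rectangular cantilever. The sketch below is illustrative only and is not the paper's semi-analytical approach or its FEM-based corrective factors.

```python
import math

def cantilever_youngs_modulus(f1_hz, length, width, thickness, density):
    """Young's modulus from the first flexural resonance of a
    rectangular clamped-free beam, via Euler-Bernoulli theory:
        f1 = (lam^2 / (2*pi*L^2)) * sqrt(E*I / (rho*A))
    solved for E. Shear deformation and process defects are ignored.
    """
    lam = 1.8751  # first-mode eigenvalue of a clamped-free beam
    area = width * thickness
    inertia = width * thickness ** 3 / 12.0
    omega = 2.0 * math.pi * f1_hz
    return (omega * length ** 2 / lam ** 2) ** 2 * density * area / inertia
```

A modulus extracted this way from a measured frequency would then be adjusted by the corrective factors the paper derives for shear, underetching and cross-section geometry.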