Similar Documents
20 similar documents retrieved.
1.
《Computers & Geosciences》2006,32(9):1320-1333
The variogram is essential for local estimation and mapping of any variable by kriging. The variogram itself must usually be estimated from sample data. The sampling density is a compromise between precision and cost, but it must be sufficiently dense to encompass the principal spatial sources of variance. A nested, multi-stage sampling scheme with separating distances increasing in geometric progression from stage to stage will do that. The data may then be analyzed by a hierarchical analysis of variance to estimate the components of variance for every stage, and hence lag. By accumulating the components starting from the shortest lag, one obtains a rough variogram for modest effort. For balanced designs the analysis of variance is optimal; for unbalanced ones, however, these estimators are not necessarily the best, and analysis by residual maximum likelihood (REML) will usually be preferable. The paper summarizes the underlying theory and illustrates its application with data from three surveys: one in which the design had four stages and was balanced, and two implemented with unbalanced designs to economize when there were more stages. A Fortran program is available for the analysis of variance, and code for the REML analysis is listed in the paper.
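The accumulation step is simple enough to sketch. A minimal illustration in Python, assuming the variance components per stage have already been estimated by the hierarchical ANOVA (all numbers below are made up):

```python
import numpy as np

# Separating distances per stage (geometric progression) and the variance
# components estimated for each stage; both are illustrative values only.
lags = np.array([1000.0, 300.0, 100.0, 30.0])   # coarsest to finest (m)
components = np.array([2.1, 1.4, 0.9, 0.6])     # hierarchical ANOVA estimates

# Accumulate from the shortest lag: the semivariance at the lag of stage j
# is the sum of the components of stage j and all finer stages.
order = np.argsort(lags)                         # finest stage first
gamma = np.cumsum(components[order])             # rough variogram ordinates
for d, g in zip(lags[order], gamma):
    print(f"lag {d:7.1f} m  ->  gamma ~ {g:.2f}")
```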

2.
A problem with the use of the geostatistical Kriging error for optimal sampling design is that the design does not adapt locally to the character of spatial variation. This is because a stationary variogram or covariance function is a parameter of the geostatistical model. The objective of this paper was to investigate the utility of non-stationary geostatistics for optimal sampling design. First, a contour data set of Wiltshire was split into 25 equal sub-regions and a local variogram was predicted for each. These variograms were fitted with models and the coefficients used in Kriging to select optimal sample spacings for each sub-region. Large differences existed between the designs for the whole region (based on the global variogram) and for the sub-regions (based on the local variograms). Second, a segmentation approach was used to divide a digital terrain model into separate segments. Segment-based variograms were predicted and fitted with models. Optimal sample spacings were then determined for the whole region and for the sub-regions. It was demonstrated that the global design was inadequate, grossly over-sampling some segments while under-sampling others.
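The spacing-selection step can be illustrated with a small sketch: for each candidate grid spacing, compute the ordinary-kriging variance at the worst-case point (a cell centre) from a locally fitted variogram, and keep the coarsest spacing that meets a tolerance. The spherical-model coefficients and the tolerance below are assumptions, not values from the paper:

```python
import numpy as np

def spherical(h, nugget, sill, a):
    """Spherical variogram model with range a."""
    h = np.asarray(h, float)
    g = nugget + (sill - nugget) * (1.5 * h / a - 0.5 * (h / a) ** 3)
    return np.where(h >= a, sill, np.where(h == 0, 0.0, g))

def ok_variance(x0, pts, gamma):
    """Ordinary-kriging variance at x0 from sample locations pts."""
    n = len(pts)
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = gamma(d)                   # kriging system in variogram form
    A[n, n] = 0.0
    b = np.ones(n + 1)
    b[:n] = gamma(np.linalg.norm(pts - x0, axis=1))
    w = np.linalg.solve(A, b)
    return float(w @ b)                    # lambda^T gamma_0 + mu

gamma = lambda h: spherical(h, nugget=0.1, sill=1.0, a=500.0)  # assumed local fit
tol = 0.45                                                     # assumed error budget
for s in [50, 100, 150, 200, 250, 300]:
    g = np.arange(-2, 3, dtype=float) * s                      # 5x5 sampling grid
    pts = np.array([(px, py) for px in g for py in g])
    v = ok_variance(np.array([s / 2.0, s / 2.0]), pts, gamma)  # worst point: cell centre
    print(f"spacing {s:3d} m -> kriging variance {v:.3f}" + ("  <- acceptable" if v <= tol else ""))
```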

3.
Optimal design of test-inputs and sampling intervals in experiments for linear system identification is treated as a nonlinear integer optimization problem. The criterion is a function of the Fisher information matrix, the inverse of which gives a lower bound for the covariance matrix of the parameter estimates. Emphasis is placed on optimum design of nonuniform data sampling intervals when experimental constraints allow only a limited number of discrete-time measurements of the output. A solution algorithm based on a steepest descent strategy is developed and applied to the design of a biologic experiment for estimating the parameters of a model of the dynamics of thyroid hormone metabolism. The effects on parameter accuracy of different model representations are demonstrated numerically, a canonical representation yielding far poorer accuracies than the original process model for nonoptimal sampling schedules, but comparable accuracies when these schedules are optimized. Several objective functions for optimization are compared. The overall results indicate that sampling schedule optimization is a very fruitful approach to maximizing expected parameter estimation accuracies when the sample size is small.
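To make the criterion concrete, here is a minimal sketch for a two-parameter exponential-decay model: the Fisher information matrix is assembled from output sensitivities at the sampling times, and a D-optimality criterion (log-determinant) compares schedules. The model, noise level, and the brute-force search (the paper uses a steepest-descent strategy) are all illustrative assumptions:

```python
import numpy as np
from itertools import combinations

# Toy model y(t) = a*exp(-b*t); nominal parameters and noise are assumed.
a, b, sigma = 1.0, 0.5, 0.05

def fim(times):
    """Fisher information for (a, b) from output sensitivities at `times`."""
    t = np.asarray(times, float)
    S = np.column_stack([np.exp(-b * t), -a * t * np.exp(-b * t)])  # dy/da, dy/db
    return S.T @ S / sigma**2

# D-optimal choice of 4 sampling times from a candidate grid by brute force.
candidates = np.linspace(0.2, 10.0, 25)
best = max(combinations(candidates, 4), key=lambda ts: np.linalg.slogdet(fim(ts))[1])
uniform = np.linspace(0.2, 10.0, 4)
print("optimized schedule:", np.round(best, 2))
print("log det FIM, optimized vs uniform:",
      round(np.linalg.slogdet(fim(best))[1], 2),
      round(np.linalg.slogdet(fim(uniform))[1], 2))
```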

4.
A major component of the Joint Research Centre's TREES-II project is the assessment of deforestation rates in moist tropical regions for the period 1992 to 1997 using a statistical sample of fine spatial resolution satellite image pairs. It is widely recognized that spatial stratification can reduce the variance of estimates in spatial sampling designs. However, at the pan-tropical scale little reliable spatial information is available to stratify on the basis of deforestation rates. This paper describes a novel sampling scheme for assessing tropical deforestation rates. Stratification is performed using percentages of forest area (derived from National Oceanic and Atmospheric Administration (NOAA) Advanced Very High Resolution Radiometer (AVHRR) data) and areas of known deforestation activity (elucidated from expert consultation) estimated for each sampling unit. Sample site selection is performed by using a sample frame based on a tessellation of hexagons on a sphere. This approach allows for a sensor-independent sample from which unbiased estimators and error variance may be computed. The scheme is currently in the implementation phase for the tropical belt, but can be extended to the global scale.

5.
To improve image quality in computer graphics, antialiasing techniques such as supersampling and multisampling are used. We explore a family of inexpensive sampling schemes that cost as little as 1.25 samples per pixel and up to 2.0 samples per pixel. By placing sample points in the corners or on the edges of the pixels, sharing can occur between pixels, and this makes it possible to create inexpensive sampling schemes. Using an evaluation and optimization framework, we present optimized sampling patterns costing 1.25, 1.5, 1.75 and 2.0 samples per pixel.
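The cost accounting behind those numbers is simple: a sample on a pixel corner is shared by four pixels and one on an edge by two, so each contributes only a fraction of a sample to any one pixel. A small sketch of that arithmetic (the example patterns are illustrative, not the paper's optimized ones):

```python
def cost_per_pixel(n_corner, n_edge, n_interior):
    """Amortized samples per pixel when corner samples are shared by 4
    pixels and edge samples by 2; interior samples are not shared."""
    return 0.25 * n_corner + 0.5 * n_edge + 1.0 * n_interior

print(cost_per_pixel(1, 0, 1))   # one corner + pixel centre -> 1.25 spp
print(cost_per_pixel(1, 2, 0))   # one corner + two edges    -> 1.25 spp
print(cost_per_pixel(0, 4, 0))   # four edge samples         -> 2.0  spp
```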

6.
Low spatial resolution satellite sensors provide information over relatively large targets with typical pixel resolutions of hundreds of km2. However, the spatial scales of ground measurements are usually much smaller. Such differences in spatial scales make the interpretation of comparisons between quantities derived from low resolution sensors and ground measurements particularly difficult. They also highlight the importance of developing appropriate sampling strategies when designing ground campaigns for validation studies of low resolution sensors.

We make use of statistical modelling of high resolution surface shortwave radiation budget (SSRB) data to look into this problem. A spatial model that describes the SSRB over a selected region is proposed, and the impact of different sampling schemes on the performance of the model is analysed. Both systematic and random sampling schemes can efficiently represent the full observation set.
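One way to compare such schemes is a small resampling experiment on a synthetic high-resolution field: draw systematic and random samples repeatedly and compare the errors in the regional mean they produce. The field, sample size, and noise below are made-up stand-ins for the SSRB data:

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic high-resolution field standing in for the modelled SSRB surface.
x, y = np.meshgrid(np.linspace(0, 1, 98), np.linspace(0, 1, 98))
field = 300 + 40 * np.sin(3 * x) * np.cos(2 * y) + rng.normal(0, 5, x.shape)
true_mean = field.mean()

errors = {"systematic": [], "random": []}
for _ in range(500):
    i0, j0 = rng.integers(0, 14, size=2)          # random offset of a 7x7 grid
    sys_sample = field[i0::14, j0::14]            # systematic: regular 7x7 grid
    rnd_sample = field.ravel()[rng.choice(field.size, 49, replace=False)]
    errors["systematic"].append(sys_sample.mean() - true_mean)
    errors["random"].append(rnd_sample.mean() - true_mean)
for name, e in errors.items():
    print(f"{name:10s} RMSE = {np.std(e):6.3f} W m-2")
```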

7.
Choosing a sampling design for assessing thematic map accuracy requires the strength of a sampling design to be matched to the objectives and resources available for the accuracy assessment. The criteria to consider when planning the sampling design are that the sample should: (1) satisfy probability sampling protocol; (2) be simple to implement and analyse; (3) result in low variance for the key estimates of the assessment; (4) permit adequate variance estimation; (5) be spatially well distributed; and (6) be cost effective. Several basic probability sampling designs useful for accuracy assessment are reviewed, and recommendations are provided to guide the selection of an appropriate design.

8.
Efficient use of iterative solvers in nested topology optimization
In the nested approach to structural optimization, most of the computational effort is invested in the solution of the analysis equations. In this study, it is suggested to reduce this computational cost by using an approximation to the solution of the analysis problem, generated by a Krylov subspace iterative solver. By choosing convergence criteria for the iterative solver that are strongly related to the optimization objective and to the design sensitivities, it is possible to terminate the iterative solution of the nested equations earlier than with traditional convergence measures. The approximation is shown computationally to be sufficiently accurate for the purposes of optimization even though the nested equation system is not necessarily solved accurately. The approach is tested on several large-scale topology optimization problems, including minimum compliance problems and compliant mechanism design problems. The optimized designs are practically identical while the time spent on the analysis is reduced significantly.
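The idea transfers directly to a conjugate-gradient loop: stop when the objective estimate stabilizes instead of when the residual is small. A minimal sketch on a random SPD system standing in for a stiffness matrix; the tolerance and the compliance-based stopping rule are illustrative simplifications of the paper's criteria:

```python
import numpy as np

def cg_compliance(K, f, rtol=1e-4, max_iter=500):
    """CG on K u = f, terminated when the compliance estimate f^T u
    stabilizes rather than when the residual norm is tiny."""
    u = np.zeros_like(f)
    r = f.copy()                      # residual for u = 0
    p = r.copy()
    obj_prev = None
    for _ in range(max_iter):
        Kp = K @ p
        alpha = (r @ r) / (p @ Kp)
        u = u + alpha * p
        r_new = r - alpha * Kp
        obj = f @ u                   # compliance estimate
        if obj_prev is not None and abs(obj - obj_prev) <= rtol * abs(obj):
            break                     # objective-based early termination
        obj_prev = obj
        p = r_new + ((r_new @ r_new) / (r @ r)) * p
        r = r_new
    return u, obj

# Small random SPD system standing in for a stiffness matrix.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 50))
K = A @ A.T + 50 * np.eye(50)
f = rng.standard_normal(50)
u, compliance = cg_compliance(K, f)
print("approximate compliance:", round(compliance, 6))
```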

9.
Understanding the factors that influence the performance of classifications over urban areas is of considerable importance to applications of remote-sensing-derived products in urban design and planning. We examined the impact of training sample selection on a binary classification of urban and nonurban for the Denver, Colorado, metropolitan area. Complete coverage reference data for urban and nonurban cover were available for the year 1997, which allowed us to examine variability in accuracy of the classification over multiple repetitions of the training sample selection and classification process. Four sampling designs for selecting training data were evaluated. These designs represented two options for stratification (spatial and class-specific) and two options for sample allocation (proportional to area and equal allocation). The binary urban and nonurban classification was obtained by employing a decision tree classifier with Landsat imagery. The decision tree classifier was applied to 1000 training samples selected by each of the four training data sampling designs, and accuracy for each classification was derived using the complete coverage reference data. The allocation of sample size to the two classes had a greater effect on classifier performance than the spatial distribution of the training data. The choice of proportional or equal allocation depends on which accuracy objectives have higher priority for a given application. For example, proportionally allocating the training sample to urban and nonurban classes favoured user’s accuracy of urban whereas equally allocating the training sample to the two classes favoured producer’s accuracy of urban. Although this study focused on urban and nonurban classes, the results and conclusions likely generalize to any binary classification in which the two classes represent disproportionate areas.
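The two allocation options amount to a one-line difference. A toy sketch, with an assumed urban area share (the 15% figure is invented for illustration):

```python
# Splitting a fixed training-sample budget between the two classes.
n, p_urban = 1000, 0.15                        # assumed urban area proportion

proportional = {"urban": round(n * p_urban), "nonurban": round(n * (1 - p_urban))}
equal        = {"urban": n // 2, "nonurban": n // 2}
print(proportional)   # {'urban': 150, 'nonurban': 850} -> favours user's accuracy of urban
print(equal)          # {'urban': 500, 'nonurban': 500} -> favours producer's accuracy of urban
```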

10.
Optimal performance of a vehicle occupant restraint system (ORS) requires an accurate assessment of occupant injury values, including head, neck and chest responses. To provide a feasible framework for incorporating occupant injury characteristics into ORS design schemes, this paper presents a reliability-based robust approach for the development of the ORS. The uncertainties of design variables are addressed and the general formulations of reliable and robust design are given in the optimization process. The ORS optimization is a highly nonlinear and large-scale problem. In order to save computational cost, an optimal sampling strategy is applied to generate sample points at the stage of design of experiments (DOE). Further, to efficiently obtain a robust approximation, support vector regression (SVR) is suggested to construct the surrogate model in the vehicle ORS design process. The multiobjective particle swarm optimization (MPSO) algorithm is used to obtain the Pareto optimal set, with emphasis on resolving conflicting requirements from some of the objectives, and the Monte Carlo simulation (MCS) method is applied to perform the reliability and robustness analysis. The differences between the three Pareto fronts of the deterministic, reliable and robust multiobjective optimization designs are compared and analyzed in this study. Finally, the reliability-based robust optimization result is verified by a sled system test. The result shows that the proposed reliability-based robust optimization design is efficient in solving ORS design optimization problems.
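The surrogate-plus-Monte-Carlo pattern is easy to sketch: fit an SVR model to a handful of expensive simulations, then estimate reliability by sampling the cheap surrogate. Everything below (the stand-in injury function, tolerances, limit) is invented for illustration; only the overall workflow follows the abstract:

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(3)

# Stand-in for an expensive occupant-injury simulation (e.g. a head response
# metric as a function of two restraint design variables).
def injury(x):
    return 1.0 + 0.8 * x[..., 0] ** 2 + 0.5 * np.sin(3 * x[..., 1])

# DOE stage: sample the design space, run the "simulation", fit the surrogate.
X = rng.uniform(-1, 1, (80, 2))
surrogate = SVR(C=100.0, epsilon=0.01).fit(X, injury(X))

# Monte Carlo reliability analysis on the cheap surrogate: probability that
# the injury value stays below a limit under design-variable uncertainty.
x_nominal, limit = np.array([0.2, -0.3]), 1.6
scatter = x_nominal + rng.normal(0.0, 0.1, (20000, 2))   # assumed tolerances
reliability = (surrogate.predict(scatter) < limit).mean()
print("estimated reliability:", round(reliability, 4))
```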

11.
Solution procedures in structural optimization are commonly based on a nested approach where approximations of the analysis and design problems are solved alternately in an iterative scheme. In this paper, we study a simultaneous approach based on an integrated formulation of the analysis and design problems. An advantage of the simultaneous approach, when compared to the nested one, is that the dependence between the analysis and design variables is imposed explicitly. In the nested approach, this dependence is implicitly determined through the solution of the analysis problem. Earlier simultaneous approaches mostly utilize various penalty function reformulations. In this paper, we make use of two augmented Lagrangian schemes, which avoid the numerical ill-conditioning inherent in penalty reformulations. These schemes give rise to Lagrangian subproblems with somewhat different properties, and two efficient techniques are adapted for their solution. The first is a projected Newton method, and the second is a simplicial decomposition scheme. Computational results for bar-truss structures show that the proposed schemes are viable approaches for solving the integrated formulation, and that they are promising for future developments.
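The augmented Lagrangian machinery can be shown on a toy equality-constrained problem standing in for the coupled analysis/design equations; the inner solver here is a generic quasi-Newton method rather than the paper's projected Newton or simplicial decomposition schemes:

```python
import numpy as np
from scipy.optimize import minimize

# Toy equality-constrained problem standing in for the integrated
# analysis/design formulation: min x^2 + y^2  s.t.  x + y = 1.
f = lambda z: z[0] ** 2 + z[1] ** 2
c = lambda z: np.array([z[0] + z[1] - 1.0])

z, lam, rho = np.zeros(2), np.zeros(1), 10.0
for _ in range(20):
    LA = lambda v: f(v) + lam @ c(v) + 0.5 * rho * c(v) @ c(v)  # augmented Lagrangian
    z = minimize(LA, z).x        # Lagrangian subproblem
    lam = lam + rho * c(z)       # multiplier update
    if np.linalg.norm(c(z)) < 1e-8:
        break
print("solution:", np.round(z, 4))   # expect [0.5 0.5]
```

Unlike a pure penalty reformulation, rho can stay moderate here because the multiplier update absorbs the constraint violation, which is the ill-conditioning advantage the abstract refers to.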

12.
In the field of design of computer experiments (DoCE), Latin hypercube designs are frequently used for the approximation and optimization of black-boxes. In certain situations, we need a special type of design consisting of two separate designs, one being a subset of the other. These nested designs can be used to deal with training and test sets, models with different levels of accuracy, linking parameters, and sequential evaluations. In this paper, we construct nested maximin Latin hypercube designs for up to ten dimensions. We show that different types of grids should be considered when constructing nested designs and discuss how to determine which grid to use for a specific application. To determine nested maximin designs for dimensions higher than two, four variants of the ESE algorithm of Jin et al. (J Stat Plan Inference 134(1):268–287, 2005) are introduced and compared. Our main focus is on GROUPRAND, the most successful of these four variants. In the numerical comparison, we consider the calculation times, space-fillingness of the obtained designs and the performance of different grids. Maximin distances for different numbers of points are provided; the corresponding nested maximin designs can be found on the website.
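A rough sketch of the two ingredients, a maximin Latin hypercube and a nested subset. The random-restart search is a crude stand-in for the paper's ESE variants (e.g. GROUPRAND), and a true nested maximin LHD also requires the subset to form a Latin hypercube on a coarser grid, a constraint dropped here:

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(1)

def lhs(n, d=2):
    """Random Latin hypercube: one point in each row and column stratum."""
    return np.column_stack([rng.permutation(n) for _ in range(d)]) / (n - 1)

def separation(X):
    """Smallest pairwise distance (the maximin criterion)."""
    D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
    return D[np.triu_indices(len(X), 1)].min()

# Crude maximin search by random restarts.
outer = max((lhs(16) for _ in range(2000)), key=separation)

# Nested subset: the 4 of 16 points that are themselves best spread.
inner = max(combinations(range(16), 4), key=lambda s: separation(outer[list(s)]))
print("outer separation:", round(separation(outer), 3))
print("inner separation:", round(separation(outer[list(inner)]), 3))
```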

13.
In this paper, we study the estimation variance of a set of global illumination algorithms based on indirect light path reuse. These algorithms usually contain two passes: in the first pass, a small number of indirect light samples are generated and evaluated, and they are then reused by a large number of reconstruction samples in the second pass. Our analysis shows that the covariance of the reconstruction samples dominates the estimation variance under high reconstruction rates, and increasing the reconstruction rate cannot effectively reduce the covariance. We also find that the covariance represents the degree to which the indirect light samples are reused during reconstruction. This analysis motivates us to design a heuristic approximating the covariance, as well as an adaptive sampling scheme based on this heuristic, to reduce the rendering variance. We validate our analysis and adaptive sampling scheme in the indirect light field reconstruction algorithm and the axis-aligned filtering algorithm for indirect lighting. Experiments are in accordance with our analysis and show that rendering artifacts can be greatly reduced at a similar computational cost.
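The dominance of the covariance term follows from the variance of an average of correlated samples, Var = sigma^2/n + (1 - 1/n)*c, which floors at c as n grows. A few lines make the floor visible (sigma^2 and c are made-up values):

```python
import numpy as np

# Variance of an n-sample average when samples share a common component
# (reused indirect light samples): Var = sigma^2/n + (1 - 1/n) * cov.
sigma2, cov = 1.0, 0.3     # assumed per-sample variance and pairwise covariance
for n in [1, 4, 16, 64, 256]:
    var = sigma2 / n + (1 - 1 / n) * cov
    print(f"n = {n:4d}: estimator variance = {var:.4f}")
# The variance floors at cov: raising the reconstruction rate cannot push
# the error below the covariance term, matching the paper's analysis.
```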

14.
Full-wave electromagnetic (EM) simulation models are ubiquitous in carrying out design closure of antenna structures. Yet, EM-based design is expensive due to the large number of analyses necessary to yield an optimized design. Computational savings can be achieved using, for example, adjoint sensitivities, surrogate-assisted procedures, design space dimensionality reduction, or similar sophisticated means. In this article, a simple modification of a rudimentary trust-region-embedded gradient search with numerical derivatives is proposed for reduced-cost optimization of input characteristics of wideband antennas. The approach exploits the history of relative changes of the design (as compared with the trust region size) during algorithm iterations to control the updates of the components of the antenna response Jacobian, specifically, to execute them only if necessary. It is demonstrated that the proposed framework may lead to over 50% savings over the reference algorithm with only minor degradation of the design quality, specifically, up to 0.3 dB (or <3%). Numerical results are supported by experimental validation of the optimized antenna designs. The presented algorithm can be utilized as a stand-alone optimization routine or as a building block of surrogate-assisted procedures.
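The core trick, refreshing only the Jacobian columns whose variables actually moved, can be sketched with finite differences; the threshold rule and the toy three-output response below are assumptions standing in for the EM model:

```python
import numpy as np

def fd_column(f, x, j, fx, h=1e-6):
    """One forward-difference Jacobian column (one extra model evaluation)."""
    xp = x.copy(); xp[j] += h
    return (f(xp) - fx) / h

def selective_jacobian(f, x, x_prev, J_prev, delta, frac=0.1):
    """Refresh only the Jacobian columns whose design variable moved by more
    than frac*delta (relative to the trust region size); reuse the rest."""
    fx = f(x)
    J = J_prev.copy()
    for j in range(len(x)):
        if abs(x[j] - x_prev[j]) > frac * delta:
            J[:, j] = fd_column(f, x, j, fx)   # saves an EM solve otherwise
    return J

# Toy three-output response standing in for simulated antenna characteristics.
f = lambda x: np.array([x[0] ** 2 + x[1], np.sin(x[0]) + x[1] ** 2, x[0] * x[1]])
x_prev = np.array([1.0, 2.0])
fx = f(x_prev)
J = np.column_stack([fd_column(f, x_prev, j, fx) for j in range(2)])  # full Jacobian
x_new = np.array([1.3, 2.001])                 # variable 1 barely moved
J = selective_jacobian(f, x_new, x_prev, J, delta=0.5)
print(J)                                       # column 0 refreshed, column 1 reused
```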

15.
All stationary experimental conditions corresponding to a discrete-time linear time-invariant causal internally stable closed loop with real rational system and feedback controller are characterized using the Youla-Kucera parametrization. Finite dimensional parametrizations of the input spectrum and the Youla-Kucera parameter allow a wide range of closed loop experiment design problems, based on the asymptotic (in the sample size) covariance matrix for the estimated parameters, to be recast as computationally tractable convex optimization problems such as semi-definite programs. In particular, for Box-Jenkins models, a finite dimensional parametrization is provided which is able to generate all possible asymptotic covariance matrices. As a special case, the very common situation of a fixed controller during the identification experiment can be handled and optimal reference signal spectra can be computed subject to closed loop signal constraints. Finally, a brief numerical comparison with closed loop experiment design based on a high model order variance expression is presented.

16.
This study compared aspatial and spatial methods of using remote sensing and field data to predict maximum growing season leaf area index (LAI) maps in a boreal forest in Manitoba, Canada. The methods tested were orthogonal regression analysis (reduced major axis, RMA) and two geostatistical techniques: kriging with an external drift (KED) and sequential Gaussian conditional simulation (SGCS). Deterministic methods such as RMA and KED provide a single predicted map with either aspatial (e.g., standard error, in regression techniques) or limited spatial (e.g., KED variance) assessments of errors, respectively. In contrast, SGCS takes a probabilistic approach, where simulated values are conditional on the sample values and preserve the sample statistics. In this application, canonical indices were used to maximize the ability of Landsat ETM+ spectral data to account for LAI variability measured in the field through a spatially nested sampling design. As expected based on theory, SGCS did the best job preserving the distribution of measured LAI values. In terms of spatial pattern, SGCS preserved the anisotropy observed in semivariograms of measured LAI, while KED reduced anisotropy and lowered global variance (i.e., lower sill), also consistent with theory. The conditional variance of multiple SGCS realizations provided a useful visual and quantitative measure of spatial uncertainty. For applications requiring spatial prediction methods, we concluded KED is more useful if local accuracy is important, but SGCS is better for indicating global pattern. Predicting LAI from satellite data using geostatistical methods requires a distribution and density of primary, reference LAI measurements that are impractical to obtain. For regional NPP modeling with coarse resolution inputs, the aspatial RMA regression method is the most practical option.

17.
This paper presents topology optimization for the design of flow fields in vanadium redox flow batteries (VRFBs), which are large-scale storage systems for renewable energy resources such as solar and wind power. It is widely known that, in recent VRFB systems, one of the key factors in boosting charging or discharging efficiency is the design of the flow field around carbon fiber electrodes and in flow channels. In this study, topology optimization is applied in order to achieve optimized flow field designs. The optimization problem is formulated as a maximization problem for the generation rate of the vanadium species governed by a simplified electrochemical reaction model. A typical porous model is incorporated into the optimization problem for expressing the carbon fiber electrode; furthermore, a mass transfer coefficient that depends on local velocity is introduced. We investigate the dependencies of the optimized configuration with respect to the porosity of the porous electrode and the pressure loss. Results indicate that patterns of interdigitated flow fields are valid designs for VRFBs.

18.
Results are given of an investigation of the properties of characteristic estimators for periodically correlated time series, obtained on the basis of finite data length. The formulae for the bias and variance of the estimators of the mean and covariance function Fourier coefficients are found. Conditions on the choice of the sampling interval for which aliasing effects do not appear are obtained. Interpolation formulae for the mean and covariance function estimates are derived. The dependence of the statistical characteristics of the estimators on sampling interval and sample size is analyzed for modulated signals.

19.
A feedback scheduler for real-time controller tasks
The problem studied in this paper is how to distribute computing resources over a set of real-time control loops in order to optimize the total control performance. Two subproblems are investigated: how the control performance depends on the sampling interval, and how a recursive resource allocation optimization routine can be designed. Linear quadratic cost functions are used as performance indicators. Expressions for calculating their dependence on the sampling interval are given. An optimization routine, called a feedback scheduler, that uses these expressions is designed.
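Under the common linear approximation J_i(h) ~ alpha_i + beta_i*h, the allocation step has a closed form: minimizing the total cost subject to a utilization setpoint sum(C_i/h_i) = U gives h_i proportional to sqrt(C_i/beta_i). A sketch with made-up task parameters (this linear-cost allocation is a standard simplification, not necessarily the paper's exact routine):

```python
import numpy as np

# Feedback-scheduler-style period allocation: minimize sum(beta_i * h_i)
# subject to total CPU utilization sum(C_i / h_i) <= U (Lagrangian closed form).
C    = np.array([0.5, 1.0, 2.0])    # task execution times (ms), assumed
beta = np.array([4.0, 1.0, 0.5])    # cost sensitivity to the period, assumed
U    = 0.8                          # utilization setpoint

h = np.sqrt(C / beta) * np.sqrt(beta * C).sum() / U   # optimal periods (ms)
print("periods:", np.round(h, 2), " utilization:", round((C / h).sum(), 3))
```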

20.
Optimal field sampling for targeting minerals using hyperspectral data
This paper presents a statistical method for deriving optimal spatial sampling schemes. It focuses on ground verification of minerals derived from hyperspectral data. Spectral angle mapper (SAM) and spectral feature fitting (SFF) classification techniques were applied to obtain mineral rule images. Each pixel in these rule images represents the similarity of the corresponding pixel in the hyperspectral image to a reference spectrum. The rule images provide weights that are used in the objective functions of the sampling schemes, which are optimized through a process of simulated annealing. HyMAP 126-channel airborne hyperspectral data acquired in 2003 over the Rodalquilar area in Spain serve as an application to target the pixels with the highest likelihood of occurrence of a specific mineral; collectively, the locations of the selected sampling points represent the distribution of that particular mineral. In this area, alunite, a predominant mineral in the alteration zones, was chosen as the target mineral. Three weight functions are defined to intensively sample areas where a high probability and abundance of alunite occur. Weight function I uses binary weights derived from the SAM classification image, leading to an even distribution of sampling points over the region of interest. Weight function II uses scaled weights derived from the SAM rule image; sample points are arranged more intensely in areas of alunite abundance. Weight function III combines information from several different rule image classifications; sampling points are distributed more intensely in regions classified as highly probable alunite by both SAM and SFF, thus representing the purest pixels. This method leads to an efficient distribution of sample points on the basis of a user-defined objective.
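The annealing step can be sketched compactly: perturb one sample location at a time and accept or reject with a cooling temperature. The synthetic weight field, sample count, and cooling schedule below are all assumptions; the objective mimics the weight-function-II idea of favouring high-similarity pixels:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic "rule image": per-pixel weights standing in for SAM similarities;
# cubing skews the field so only a few (alunite-like) pixels score highly.
W = rng.random((60, 60)) ** 3
n_samples = 30

def fitness(idx):
    """Weight-function-II style objective: total weight of the sampled pixels."""
    return W.ravel()[idx].sum()

# Simulated annealing: move one sample point at a time, always accept
# improvements, accept worse schemes with a temperature-dependent probability.
idx = rng.choice(W.size, n_samples, replace=False)
f_cur, T = fitness(idx), 1.0
for _ in range(20000):
    new_pix = rng.integers(W.size)
    if new_pix in idx:
        continue                                  # keep sample points distinct
    cand = idx.copy()
    cand[rng.integers(n_samples)] = new_pix
    f_new = fitness(cand)
    if f_new > f_cur or rng.random() < np.exp((f_new - f_cur) / T):
        idx, f_cur = cand, f_new
    T *= 0.9995                                   # geometric cooling schedule
print("total weight of optimized scheme:", round(f_cur, 2))
```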
