Similar Literature
20 similar records found.
1.
Rapid and accurate estimation of Ground Cover (GC) at regional and global scales for agricultural management applications is only possible by using remote sensing (RS). In this study, two Vegetation Indices (VIs), the Perpendicular Vegetation Index (PVI) and the Normalized Difference Vegetation Index (NDVI), were used for estimating GC. Since the parameters of the bare soil line play an important role in calculating GC from PVI, this line was extracted with the red-NIR_min (minimum near-infrared) method using different interval lengths (0.0001, 0.0005, and 0.0010). In addition to traditional statistics such as the Root Mean Square Error (RMSE), a sensitivity analysis (S) was used to sharpen the assessment of the models' estimates. The results indicated that the PVI-based method, in contrast to the NDVI-based approach, performed better in estimating the GC of wheat. The highest correlation between observed and PVI-estimated GC was achieved at an interval length of 0.0005 (R² = 0.91) with an RMSE of 8.82. The corresponding regression line (GC_EST = -3.47 + 0.96 GC_OBS) was not significantly different from the 1:1 line. As expected, the best estimation was achieved when the sensitivity of the PVI-based GC estimate (interval length 0.0005) was almost constant and low compared to the other models.
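A minimal sketch of how the two indices and a linear ground-cover scaling could be computed, assuming a bare soil line NIR = a·Red + b; the function names, soil-line parameters and scaling bounds below are illustrative assumptions, not the study's calibration.

```python
import numpy as np

def ndvi(red, nir):
    """Normalized Difference Vegetation Index."""
    return (nir - red) / (nir + red)

def pvi(red, nir, a, b):
    """Perpendicular Vegetation Index: perpendicular distance of a pixel
    from the bare soil line NIR = a*Red + b in red-NIR space."""
    return (nir - a * red - b) / np.sqrt(1.0 + a ** 2)

def ground_cover(vi, vi_soil, vi_full):
    """Linear scaling of a vegetation index between its bare-soil and
    full-cover values, expressed as percent ground cover."""
    gc = 100.0 * (vi - vi_soil) / (vi_full - vi_soil)
    return np.clip(gc, 0.0, 100.0)

# Synthetic red/NIR reflectances with assumed soil-line parameters
red = np.array([0.18, 0.12, 0.08])
nir = np.array([0.22, 0.35, 0.48])
a, b = 1.2, 0.04   # assumed bare soil line slope and intercept
print(ground_cover(pvi(red, nir, a, b), vi_soil=0.0, vi_full=0.35))
print(ground_cover(ndvi(red, nir), vi_soil=0.1, vi_full=0.9))
```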

2.
The challenge posed by the management of sudden migration of large groups of people lies in the ability to portray and predict the scale and dynamics of such movement accurately. This is further complicated by the fact that the data associated with such migration are largely incomplete or untrustworthy. In view of the shortcomings of existing approaches to modelling instances of conflict, and the lack of data on the movement patterns of forcibly displaced individuals, a generic framework is proposed for aiding the design of agent-based models that simulate conflict instances together with the localised decision-making processes underlying the movement of refugees, undocumented migrants and internally displaced persons fleeing conflict-affected areas. A concept demonstrator based on the framework is developed to demonstrate its usefulness and practicability in the context of conflict-induced forced migration in Syria. The value of such a model lies in the fact that it produces as output the corresponding emergent large-scale migration patterns, which may assist in understanding the movement patterns of forcibly displaced people, predicting their anticipated destinations, and serving as a decision-support tool for humanitarian relief.

3.
Computer Networks, 2001, 35(1): 77-95
Service providers typically define quality of service problems using threshold tests, such as “Are HTTP operations greater than 12 per second on server XYZ?” Herein, we estimate the probability of threshold violations for specific times in the future. We model the threshold metric (e.g., HTTP operations per second) at two levels: (1) non-stationary behavior (as is done in workload forecasting for capacity planning) and (2) stationary, time-serial dependencies. Our approach is assessed using simulation experiments and measurements of a production Web server. For both assessments, the probabilities of threshold violations produced by our approach lie well within two standard deviations of the measured fraction of threshold violations.
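A minimal sketch of the two-level idea, assuming a linear trend for the non-stationary part and an AR(1) model of the detrended residuals for the stationary, time-serial part; the synthetic HTTP series, threshold, horizon and function names are illustrative, not the paper's workloads or models.

```python
import numpy as np
from scipy.stats import norm

def violation_probability(series, threshold, horizon):
    """Estimate P(metric > threshold) `horizon` steps ahead by combining a
    deterministic trend (non-stationary part) with an AR(1) model of the
    detrended residuals (stationary, time-serial part)."""
    t = np.arange(len(series))
    slope, intercept = np.polyfit(t, series, 1)        # crude trend model
    resid = series - (slope * t + intercept)

    phi = np.corrcoef(resid[:-1], resid[1:])[0, 1]     # AR(1) coefficient
    sigma2 = np.var(resid) * (1 - phi ** 2)            # innovation variance

    # k-step-ahead forecast of the residual and its variance
    mean_resid = (phi ** horizon) * resid[-1]
    var_resid = sigma2 * (1 - phi ** (2 * horizon)) / (1 - phi ** 2)

    forecast = slope * (len(series) - 1 + horizon) + intercept + mean_resid
    return 1.0 - norm.cdf(threshold, loc=forecast, scale=np.sqrt(var_resid))

rng = np.random.default_rng(0)
x = 5 + 0.02 * np.arange(500) + rng.normal(0, 1, 500)  # synthetic HTTP ops/s
print(violation_probability(x, threshold=12.0, horizon=60))
```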

4.
Classical expert systems are rule-based, depending on predicates expressed over attributes and their values. In the process of building expert systems, the attributes and the constants used to interpret their values need to be specified. Standard techniques for doing this are drawn from psychology, for instance interviewing and protocol analysis. This paper describes a statistical approach to deriving interpreting constants for given attributes. It is also possible to suggest the need for attributes beyond those given. The approach for selecting an interpreting constant is demonstrated by an example. The data to be fitted are first generated by selecting a representative collection of instances of the narrow decision addressed by a rule, then making a judgement for each instance, and defining an initial set of potentially explanatory attributes. A decision rule graph plots the judgements made against pairs of attributes. It reveals rules and key instances directly. It also shows when no rule is possible, thus suggesting the need for additional attributes. A study of a collection of seven rule-based models shows that the attributes defined during the fitting process improved the fit of the final models to the judgements by twenty percent over models built with only the initial attributes.
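One simple way to derive an interpreting constant statistically, sketched below on made-up toy data: scan candidate cut points on a single attribute and keep the constant that best separates the judgements, with a low best accuracy signalling that additional attributes are needed. The function name and data are assumptions, not the paper's procedure.

```python
import numpy as np

def best_threshold(values, judgements):
    """Search candidate cut points on one attribute and return the constant
    that best separates positive from negative judgements, with its accuracy.
    A low best accuracy suggests the attribute set needs to be extended."""
    values = np.asarray(values, dtype=float)
    judgements = np.asarray(judgements, dtype=bool)
    sorted_vals = np.sort(values)
    candidates = (sorted_vals[:-1] + sorted_vals[1:]) / 2.0
    best_c, best_acc = None, 0.0
    for c in candidates:
        acc = max(np.mean((values > c) == judgements),
                  np.mean((values <= c) == judgements))
        if acc > best_acc:
            best_c, best_acc = c, acc
    return best_c, best_acc

# Toy instances: attribute = response time (s); judgement = "acceptable?"
times = [0.2, 0.4, 0.7, 1.1, 1.6, 2.3, 3.0]
ok = [1, 1, 1, 1, 0, 0, 0]
print(best_threshold(times, ok))    # constant near 1.35, accuracy 1.0
```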

5.
Context-aware computing is a paradigm for governing the numerous mobile devices surrounding us. In this computing paradigm, software applications continuously and dynamically adapt to different “contexts” implying different software configurations of such devices. Unfortunately, modelling a context-aware application (CAA) for all possible contexts is only feasible in the simplest of cases. Hence, tool support for verifying certain properties is required. In this article, we introduce the CAA model, in which context adaptations are specified explicitly as model transformations. By mapping this model to graphs and graph transformations, we can exploit graph transformation techniques such as critical pair analysis to find contexts for which the resulting application model is ambiguous. We validate our approach by means of an example of a mobile city guide, demonstrating that we can identify subtle context interactions that might go unnoticed otherwise.

6.
A new fuzzy cover approach to clustering
This paper presents a new fuzzy cover-based clustering algorithm. In the proposed algorithm, the concepts of fuzzy cover and objective function are employed to identify holding points in the dataset, and these holding points are associated together to build up the backbones of the final clusters. Three specific objectives underlie the presentation of the proposed approach. The first is to describe the mathematical formulation of the fuzzy covers, and the second is to summarize the detailed procedure of constructing fuzzy covers and splicing them into clusters. The third is to demonstrate that this approach is able to find reasonable representative patterns in the final clusters. We illustrate the approach with four examples in order to verify its clustering effectiveness.

7.
A parametric statistical approach to the industrial actuator fault-detection and isolation benchmark is presented. An algorithm for detecting a change in the dynamics of a linear system is formulated as a set of sequential probability ratio tests of the innovations from a bank of Kalman filters. The algorithm is extended to allow estimation of a disturbance using a generalised likelihood ratio test. Modifications are proposed for when the model is nonlinear and the modeling error is significant. The algorithm is evaluated using the benchmark test data and is shown to provide low detection delays while being robust to noise, disturbances and model error.
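As a rough illustration of the detection step, the sketch below runs a sequential probability ratio test on (assumed whitened, Gaussian) Kalman filter innovations; the thresholds, noise levels and mean-shift magnitude are illustrative assumptions rather than the benchmark's values.

```python
import numpy as np

def sprt_on_innovations(innov, sigma, mu1, alpha=0.01, beta=0.01):
    """Sequential probability ratio test on approximately Gaussian Kalman
    filter innovations: H0 mean 0 vs H1 mean mu1, known std `sigma`.
    Returns ('H0'|'H1'|'undecided', sample index at decision)."""
    A = np.log((1 - beta) / alpha)      # upper (accept H1) threshold
    B = np.log(beta / (1 - alpha))      # lower (accept H0) threshold
    llr = 0.0
    for k, nu in enumerate(innov):
        # log-likelihood ratio increment for a Gaussian mean shift
        llr += (mu1 * nu - 0.5 * mu1 ** 2) / sigma ** 2
        if llr >= A:
            return "H1", k
        if llr <= B:
            return "H0", k
    return "undecided", len(innov) - 1

# In practice the test is restarted whenever H0 is accepted; here we simply
# run it on post-fault innovations with a mean shift of 0.4.
rng = np.random.default_rng(1)
innov_faulty = rng.normal(0.4, 0.5, 200)
print(sprt_on_innovations(innov_faulty, sigma=0.5, mu1=0.4))
```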

8.
Lexical stress is of primary importance for generating the correct pronunciation of words in many languages; hence its correct placement is a major task in prosody prediction and generation for high-quality TTS (text-to-speech) synthesis systems. This paper proposes a statistical approach to lexical stress assignment for TTS synthesis in Romanian. The method is essentially based on n-gram language models at the character level, and uses a modified Katz backoff smoothing technique to address data sparseness during training. Monosyllabic words are considered as not carrying stress and are separated out by an automatic syllabification algorithm. A maximum accuracy of 99.11% was obtained on a test corpus of about 47,000 words.
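A toy sketch of the character-level idea: score every candidate stress placement with a character n-gram model and keep the best one. Add-one smoothing stands in for the modified Katz backoff, and the marked training words are illustrative, not real Romanian stress data.

```python
import math
from collections import Counter

def ngrams(word, n=3):
    padded = f"^{word}$"
    return [padded[i:i + n] for i in range(len(padded) - n + 1)]

def train(stressed_words, n=3):
    """Count character n-grams over words whose stressed vowel is
    preceded by an apostrophe (toy stand-in for a stress-marked lexicon)."""
    counts = Counter()
    for w in stressed_words:
        counts.update(ngrams(w, n))
    return counts

def score(word, counts, n=3):
    # Add-one-smoothed log-count score; a crude stand-in for Katz backoff.
    return sum(math.log(counts[g] + 1) for g in ngrams(word, n))

def assign_stress(word, counts, vowels="aeiouăâî"):
    """Insert a stress mark before each vowel in turn and keep the
    placement the character-level model scores highest."""
    candidates = [word[:i] + "'" + word[i:]
                  for i, c in enumerate(word) if c in vowels]
    return max(candidates, key=lambda c: score(c, counts))

# Toy training data; stress positions are illustrative only.
model = train(["c'asa", "m'are", "fer'eastra", "copil'arie"])
print(assign_stress("casa", model))   # -> "c'asa"
```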

9.
In this paper, we present a new approach and a novel interface, Virtual Human Sketcher (VHS), which enables those who can draw to sketch out various human body models. Our approach supports freehand drawing input and a “Stick Figure→Fleshing-out→Skin Mapping” modelling pipeline. Following this pipeline, a stick figure is drawn first to illustrate a figure pose, which is automatically reconstructed into 3D through a “Multi-layered Back-Front Ambiguity Clarifier”. It is then fleshed out with freehand body contours. A “Creative Model-based Method” is developed for interpreting the body size, shape, and fat distribution of the sketched figure and transferring them onto a 3D human body through graphical comparisons and generic model morphing. The generic model is encapsulated with three distinct layers: skeleton, fat tissue, and skin. It can be transformed sequentially through rigid morphing, fatness morphing, and surface matching to match the 2D figure sketch. The resulting initial 3D body model can be incrementally modified by sketching directly on the 3D model. In addition, this body surface can be mapped onto a series of posed stick figures to be interpolated as a 3D character animation. VHS has been tested by various users on a Tablet PC. After minimal training, even a beginner can create plausible human bodies and animate them within minutes.

10.
A practical approach to spectral volume rendering
To make a spectral representation of color practicable for volume rendering, a new low-dimensional subspace method is used as the carrier of spectral information. With that model, spectral light-material interaction can be integrated into existing volume rendering methods at almost no penalty. In addition, slow rendering methods can profit from the new technique of post-illumination: generating spectral images in real time for arbitrary light spectra under a fixed viewpoint. Thus, the capability of spectral rendering to create distinct impressions of a scene under different lighting conditions is established as a method of real-time interaction. Although we use an achromatic opacity in our rendering, we show how spectral rendering permits different data set features to be emphasized or hidden as long as they have not been entirely obscured. The use of post-illumination is an order of magnitude faster than changing the transfer function and repeating the projection step. To put the user in control of the spectral visualization, we devise a new widget, a "light-dial", for interactively changing the illumination, and include a usability study of this new light space exploration tool. Applied to spectral transfer functions, different lights bring out or hide specific qualities of the data. In conjunction with post-illumination, this provides a new means for preparing data for visualization and forms a new degree of freedom for guided exploration of volumetric data sets.
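One reading of the post-illumination idea is a linear factorization: if material spectra live in a small linear basis, the renderer can accumulate one image per basis coefficient and relight in real time as a weighted sum. The basis construction, sizes and names below are illustrative assumptions, not the paper's representation.

```python
import numpy as np

rng = np.random.default_rng(2)
n_wl, k, h, w = 31, 4, 64, 64                    # wavelength samples, basis size, image size
B = np.linalg.qr(rng.normal(size=(n_wl, k)))[0]  # assumed orthonormal spectral basis
coeff_images = rng.random((k, h, w))             # stand-in for per-coefficient images from one rendering pass

def postilluminate(light_spectrum):
    """Recombine the pre-rendered coefficient images for a new light
    spectrum without re-rendering the volume."""
    weights = B.T @ light_spectrum                           # project light onto the basis
    return np.tensordot(weights, coeff_images, axes=1)       # weighted sum of images

daylight = np.ones(n_wl)                          # flat test illuminant
print(postilluminate(daylight).shape)             # (64, 64)
```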

11.
Flow experience, the degree to which a person feels involved in a particular activity, is an important influence on human–computer interaction. Building on Guo and Poole’s (2009) model of flow experience in Web navigation, and van Schaik and Ling's (in press) cognitive-experiential approach to modelling interaction experience, this research demonstrates the crucial role of the preconditions of flow experience in human–computer interaction. In an experiment, the preconditions of flow experience – but not flow experience proper – mediated the effects of artefact complexity, task complexity and intrinsic motivation (as a situation-specific trait) on both flow and task outcome. However, preconditions did not predict overall artefact evaluation. Within a staged model of flow experience, the broader implications of this work for human–computer interaction are explored.

12.
Failure diagnosis is one of the key challenges of Service-oriented Architectures (SoA). One method of identifying occurrences of failure is to use Diagnosers: software modules or services deployed with the system to monitor the interaction between services and identify whether a failure has happened or may have happened. This paper aims to present a suitable modelling framework to allow automated creation of Diagnosers based on Discrete Event System (DES) theory. Coming up with an appropriate modelling language framework is a prerequisite to applying DES techniques. Modelling languages popular in DES, such as Petri nets and automata, despite being sufficiently adequate for modelling, are not well adopted by the SoA community. Inspired by Petri nets and Workflow Graphs, the modelling language suggested in this paper closely follows BPEL, which is widely used by the community. In particular, our language includes constructs that are supported by major tool vendors. To demonstrate that the suggested formal language is a suitable basis for the application of DES theory, we have extended one of the existing DES methods for the creation of a centralised Diagnoser. Two algorithms for creating Diagnosers are put forward. These algorithms are applied to models that are abstracted from the BPEL representation of the services involved. As a proof of concept, an implementation of the suggested approach is created as an Oracle JDeveloper plugin that automatically produces new Diagnosing services and integrates them to work with existing services. The paper ends with a series of empirical results on the performance-related aspects of the proposed method.

13.
In order to obtain a model equation for calculating percentage plant cover from multi-spectral radiances remotely sensed by satellites, a regression procedure is used to connect space remote-sensing data to ground plant-cover measurements. A traditional linear regression model using the normalized difference vegetation index (NDVI) is examined with remote-sensing data from the SPOT satellite and ground measurements from the LCTA project for a test site at Hohenfels, Germany. A relaxation vegetation index (RVI) is proposed in a non-linear regression model to replace the NDVI of the linear regression model and obtain a better estimate of percentage plant cover. The RVI is defined as a function of the raw remote-sensing data X_i in channel i. Using the RVI, the correlation coefficient between calculated and observed percentage plant cover for a test scene in 1989 reaches 0.9, while for the NDVI it is only 0.7; the coefficient of multiple determination R² reaches 0.8 for the RVI, while it is only 0.5 for the NDVI. Numerical testing shows that the ability of the RVI to predict percentage plant cover from space remote-sensing data for the same scene, or for the scene in other years, is much stronger than that of the NDVI.
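Since the excerpt does not reproduce the RVI formula, the sketch below only illustrates the NDVI linear-regression baseline and the R² comparison machinery, on synthetic data standing in for SPOT radiances and LCTA ground cover; the channel simulation and function names are assumptions.

```python
import numpy as np

def ndvi(red, nir):
    return (nir - red) / (nir + red)

def fit_linear(vi, cover):
    """Least-squares fit cover = a*VI + b and return (a, b, R^2)."""
    a, b = np.polyfit(vi, cover, 1)
    pred = a * vi + b
    ss_res = np.sum((cover - pred) ** 2)
    ss_tot = np.sum((cover - cover.mean()) ** 2)
    return a, b, 1.0 - ss_res / ss_tot

# Synthetic example standing in for SPOT channels and measured plant cover
rng = np.random.default_rng(3)
cover = rng.uniform(0, 100, 200)
red = 0.25 - 0.0015 * cover + rng.normal(0, 0.01, 200)
nir = 0.20 + 0.0025 * cover + rng.normal(0, 0.01, 200)
print(fit_linear(ndvi(red, nir), cover))
```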

14.
International Journal of Computer Mathematics, 2012, 89(5-6): 511-523
Because they have the minimax property, Chebyshev polynomials are used today to economize arbitrary polynomial functions. In this work, we present a statistical approach and show that, contrary to current thought, the Chebyshev polynomials of the first kind are not appropriate for economizing these polynomials when judged by this statistical approach. A numerical results section is also given to demonstrate this claim clearly.
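For context, here is a minimal sketch of the standard Chebyshev economization that the paper calls into question: convert to the Chebyshev basis on [-1, 1], drop the highest-order terms, and bound the introduced error by the dropped coefficients. The example polynomial and the number of dropped terms are illustrative.

```python
import numpy as np
from math import factorial
from numpy.polynomial import chebyshev as C

def economize(power_coeffs, drop):
    """Economize a polynomial on [-1, 1]: convert to the Chebyshev basis,
    drop the `drop` highest-order terms, and convert back. The minimax
    property bounds the introduced error by the sum of the absolute values
    of the dropped Chebyshev coefficients."""
    cheb = C.poly2cheb(power_coeffs)
    truncated = cheb[:len(cheb) - drop]
    error_bound = np.sum(np.abs(cheb[len(cheb) - drop:]))
    return C.cheb2poly(truncated), error_bound

# Taylor coefficients of exp(x) up to degree 6 (increasing powers of x)
coeffs = [1 / factorial(k) for k in range(7)]
econ, err_bound = economize(coeffs, drop=2)
x = np.linspace(-1, 1, 5)
observed = np.max(np.abs(np.polyval(coeffs[::-1], x) - np.polyval(econ[::-1], x)))
print(err_bound, observed)   # observed error stays within the bound
```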

15.
This paper deals with a method for mapping the apparent ground brightness on a pixel basis. It makes use of geostationary satellite visible data and generalizes the earlier work of Cano (1982). The detection of clouds larger than one pixel is performed in a time series by comparing the cloud-induced sensor response to the signal which would occur if the pixel were cloud-free, by means of iterative and adaptive filtering. To illustrate the method, Meteosat data, either received by means of a WEFAX-type receiver connected to a personal computer or provided by ESOC, have been processed. Maps of apparent ground brightness are presented for Europe and Africa at 5 km resolution.
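A toy sketch of the per-pixel idea, assuming clouds only brighten the visible signal: iteratively exclude samples far above the current clear-sky estimate and recompute it. This stand-in omits the adaptive details of the actual filter, and the thresholds and data are made up.

```python
import numpy as np

def clear_sky_background(series, k=2.0, iterations=5):
    """Iteratively estimate the cloud-free ('apparent ground brightness')
    value of one pixel from a time series: samples far above the current
    background estimate are treated as cloudy and excluded, then the
    estimate is recomputed."""
    mask = np.ones(len(series), dtype=bool)
    for _ in range(iterations):
        bg, spread = series[mask].mean(), series[mask].std()
        mask = series < bg + k * spread      # clouds brighten the pixel
    return series[mask].mean()

rng = np.random.default_rng(6)
series = rng.normal(80, 3, 60)                       # clear-sky counts
cloudy_idx = rng.choice(60, 15, replace=False)
series[cloudy_idx] += rng.uniform(40, 120, 15)       # cloud-contaminated samples
print(clear_sky_background(series))                  # close to 80
```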

16.
Quality assessment plays a crucial role in data analysis. In this paper, we present a reduced-reference approach to volume data quality assessment. Our algorithm extracts important statistical information from the original data in the wavelet domain. Using the extracted information as feature and predefined distance functions, we are able to identify and quantify the quality loss in the reduced or distorted version of data, eliminating the need to access the original data. Our feature representation is naturally organized in the form of multiple scales, which facilitates quality evaluation of data with different resolutions. The feature can be effectively compressed in size. We have experimented with our algorithm on scientific and medical data sets of various sizes and characteristics. Our results show that the size of the feature does not increase in proportion to the size of original data. This ensures the scalability of our algorithm and makes it very applicable for quality assessment of large-scale data sets. Additionally, the feature could be used to repair the reduced or distorted data for quality improvement. Finally, our approach can be treated as a new way to evaluate the uncertainty introduced by different versions of data.
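A simplified sketch of a reduced-reference feature, using per-scale statistics of a block-average pyramid as a stand-in for the paper's wavelet-domain statistics, with a Euclidean distance as an assumed distance function; names and parameters are illustrative.

```python
import numpy as np

def multiscale_feature(volume, levels=3):
    """Reduced-reference feature: per-scale mean/std of detail residuals
    from a simple 2x block-average pyramid (a stand-in for wavelet
    subband statistics)."""
    feature = []
    v = volume.astype(float)
    for _ in range(levels):
        # trim odd sizes, then block-average each axis by 2
        v = v[: v.shape[0] // 2 * 2, : v.shape[1] // 2 * 2, : v.shape[2] // 2 * 2]
        coarse = v.reshape(v.shape[0] // 2, 2, v.shape[1] // 2, 2,
                           v.shape[2] // 2, 2).mean(axis=(1, 3, 5))
        detail = v - np.repeat(np.repeat(np.repeat(coarse, 2, 0), 2, 1), 2, 2)
        feature.extend([detail.mean(), detail.std()])
        v = coarse
    return np.array(feature)

def quality_distance(feature_ref, feature_test):
    """Distance between reference and test features; larger means more loss."""
    return np.linalg.norm(feature_ref - feature_test)

rng = np.random.default_rng(4)
vol = rng.random((64, 64, 64))
distorted = vol + rng.normal(0, 0.05, vol.shape)
print(quality_distance(multiscale_feature(vol), multiscale_feature(distorted)))
```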

17.
In this study, a multi-scale phase-based sparse disparity algorithm and a probabilistic model for matching uncertain phase are proposed. The features used are oriented edges extracted using steerable filters. Feature correspondences are estimated from phase similarity at multiple scales with a magnitude weighting scheme. In order to achieve sub-pixel accuracy in disparity, we use a fine-tuning procedure which employs the phase difference between corresponding feature points. We also derive a probabilistic model in which phase uncertainty is trained using data from a single image pair. The model is used to provide stable matches. The disparity algorithm and the probabilistic phase uncertainty model are verified on various stereo image pairs.
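The sub-pixel fine-tuning step can be illustrated by the standard phase-difference correction Δd = Δφ/ω, where ω is the local spatial frequency of the band-pass response; the sketch below shows that generic idea on a 1-D toy signal, not the paper's magnitude-weighted multi-scale scheme.

```python
import numpy as np

def subpixel_refine(disparity_int, phase_left, phase_right, freq):
    """Refine an integer disparity using the phase difference between the
    matched points: delta = (phi_L - phi_R) / local_frequency, with the
    difference wrapped to (-pi, pi] before dividing."""
    dphi = np.angle(np.exp(1j * (phase_left - phase_right)))   # wrap
    return disparity_int + dphi / freq

# Toy 1-D example: a band-pass response with known local frequency
freq = 2 * np.pi / 8.0                 # radians per pixel
true_shift = 3.4
x = np.arange(64)
phase_left = freq * x
phase_right = freq * (x - true_shift)
print(subpixel_refine(3, phase_left[20], phase_right[20 + 3], freq))  # ~3.4
```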

18.
This paper proposes a statistical approach to the formal synthesis and improvement of inspection checklists. The approach is based on defect causal analysis and defect modeling. The defect model is developed using IBM's Orthogonal Defect Classification. A case study describes the steps required and a tool for the implementation. The advantages and disadvantages of both empirical and statistical methods are discussed and compared. It is suggested that the statistical approach should be used in conjunction with the empirical approach. The main advantage of the proposed technique is that it allows a checklist to be tuned according to the most recent project experience and optimal checklist items to be identified even when a source document does not exist.

19.
In this paper we consider a statistical approach to augmenting a limited database of groundtruth documents for use in the evaluation of optical character recognition software. A modified moving-blocks bootstrap procedure is used to construct surrogate documents for this purpose, which prove to serve effectively and, in some regards, indistinguishably from groundtruth. The proposed method is validated through a rigorous statistical procedure.
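For reference, a plain moving-blocks bootstrap (the unmodified form) resamples overlapping blocks so that short-range structure is preserved; the error-indicator sequence below is a made-up stand-in for a groundtruth document.

```python
import numpy as np

def moving_blocks_bootstrap(sequence, block_length, rng=None):
    """Resample a sequence by concatenating randomly chosen overlapping
    blocks of length `block_length`, preserving short-range dependence."""
    rng = rng or np.random.default_rng()
    seq = np.asarray(sequence)
    n = len(seq)
    starts = rng.integers(0, n - block_length + 1,
                          size=int(np.ceil(n / block_length)))
    blocks = [seq[s:s + block_length] for s in starts]
    return np.concatenate(blocks)[:n]

# Toy "document" of word-level OCR error indicators (1 = misrecognized word)
errors = np.array([0, 0, 1, 0, 0, 0, 1, 1, 0, 0, 0, 0, 1, 0, 0, 0])
surrogate = moving_blocks_bootstrap(errors, block_length=4,
                                    rng=np.random.default_rng(5))
print(surrogate, surrogate.mean())
```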

20.
GAMS (General Algebraic Modelling System) is a software package which makes numerical optimization a much simpler endeavor for specialists from a wide range of disciplines. Writing one's own Fortran code for optimizers, and preparing unfriendly fixed-format Fortran-style input files, is no longer necessary. This review gives an overall view of GAMS' capabilities and uses in the field of numerical optimization.
