Similar Literature
20 similar documents found (search time: 750 ms)
1.
Global Sensitivity Analysis (GSA) is an essential technique to support the calibration of environmental models by identifying the influential parameters (screening) and ranking them. In this paper, the widely used variance-based method (Sobol') and the recently proposed moment-independent PAWN method for GSA are applied to the Soil and Water Assessment Tool (SWAT) and compared in terms of the ranking and screening results for 26 SWAT parameters. In order to set a threshold for parameter screening, we propose the use of a “dummy parameter”, which has no influence on the model output. The sensitivity index of the dummy parameter is calculated from sampled data, without changing the model equations. We find that Sobol' and PAWN identify the same 12 influential parameters but rank them differently, and we discuss how this result may be related to the limitations of the Sobol' method when the output distribution is asymmetric.
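To make the “dummy parameter” idea concrete, here is a minimal sketch, assuming the SALib library and a toy three-parameter model (both are illustrative stand-ins, not the SWAT setup from the paper): a parameter the model never reads is added to the sample, and its estimated total-order index serves as the screening threshold.

```python
# Dummy-parameter screening with variance-based (Sobol') GSA.
# Assumes SALib is installed; the toy model is hypothetical.
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

problem = {
    "num_vars": 3,
    "names": ["k1", "k2", "dummy"],          # "dummy" never enters the model
    "bounds": [[0, 1], [0, 1], [0, 1]],
}

def model(x):
    return np.sin(np.pi * x[0]) + 0.3 * x[1] ** 2   # ignores x[2]

X = saltelli.sample(problem, 1024)
Y = np.apply_along_axis(model, 1, X)
Si = sobol.analyze(problem, Y)

# Any nonzero index estimated for the dummy is pure sampling noise, so it
# is a natural threshold: parameters below it are screened out.
threshold = Si["ST"][problem["names"].index("dummy")]
for name, st in zip(problem["names"], Si["ST"]):
    print(f"{name}: ST = {st:.3f}", "influential" if st > threshold else "-")
```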

2.
Traditional ‘Design of Experiment’ (DOE) approaches focus on minimization of parameter error variance. In this work, we propose a new “decision-oriented” DOE approach that takes into account how the generated data, and subsequently the model developed from them, will be used in decision making. By doing so, the parameter variances get distributed in a manner such that their adverse impact on the targeted decision making is minimal. Our results show that the new decision-oriented DOE approach significantly outperforms the standard D-optimal design approach. The new design method should be a valuable tool when experiments are conducted for the purpose of making R&D decisions.
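For reference, the baseline the abstract compares against is D-optimality, which picks the design maximizing det(XᵀX) and thereby minimizes the volume of the parameter confidence ellipsoid. A minimal sketch, assuming a linear model and hypothetical candidate designs:

```python
# Selecting the D-optimal design among candidates for a linear model
# y = X @ beta + noise. The candidate designs here are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

def d_criterion(X):
    # Determinant of the information matrix; larger means tighter estimates.
    return np.linalg.det(X.T @ X)

# Three candidate 8-run designs over two factors scaled to [-1, 1].
candidates = [rng.uniform(-1, 1, size=(8, 2)) for _ in range(3)]
best = max(candidates, key=d_criterion)
print("best D-criterion:", d_criterion(best))
```

A decision-oriented design would replace `d_criterion` with a measure of how the resulting parameter uncertainty propagates into the downstream decision.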

3.
In this paper, we study multiparametric sensitivity analysis of the additive model in data envelopment analysis using the concept of maximum volume in the tolerance region. We construct critical regions for simultaneous and independent perturbations in all inputs/outputs of an efficient decision making unit. Necessary and sufficient conditions are derived to classify the perturbation parameters as “focal” and “nonfocal.” Nonfocal parameters can have unlimited variations because of their low sensitivity in practice, and they can be deleted from the final analysis. For the focal parameters, a maximum-volume region is characterized. The theoretical results are illustrated with a numerical example.
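For context, a common statement of the additive DEA model being perturbed (a standard formulation; the abstract itself does not reproduce it) evaluates a decision making unit DMU_o with inputs x_o and outputs y_o:

```latex
\begin{align*}
\max_{\lambda,\, s^-,\, s^+} \quad & \mathbf{1}^\top s^- + \mathbf{1}^\top s^+ \\
\text{s.t.} \quad & X\lambda + s^- = x_o, \\
                  & Y\lambda - s^+ = y_o, \\
                  & \lambda \ge 0,\quad s^- \ge 0,\quad s^+ \ge 0.
\end{align*}
```

DMU_o is efficient exactly when all optimal slacks are zero; the sensitivity analysis asks how far x_o and y_o can be perturbed before this efficiency classification changes.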

4.
This work focuses on the performance sensitivities of microwave amplifiers using the “adjoint network and adjoint variable” method, via “wave” approaches, which includes sensitivities of the transducer power gain, noise figure, and magnitudes and phases of the input and output reflection coefficients. The method can be extended to sensitivities of other performance-measure functions. The adjoint-variable methods for design-sensitivity analysis offer computational speed and accuracy, and they can be used for efficient gradient-based optimization and in tolerance and yield analyses. In this work, an arbitrarily configured microwave amplifier is considered: first, each element in the network is modeled by the scattering-matrix formulation; then the topology of the network is taken into account using the connection scattering-matrix formulation. The wave approach is utilized in the evaluation of all the performance-measure functions, and sensitivity invariants are then formulated using Tellegen's theorem. Performance sensitivities of the T- and Π-type distributed-parameter amplifiers are considered as a worked example. The numerical results of the T- and Π-type amplifiers are compared to each other for the design targets of noise figure F_req = 0.46 dB (≅ 1.12), V_ireq = 1, and G_Treq = 12 dB (≅ 15.86) in the frequency range 2–11 GHz. Furthermore, the analytical methods of “gain factorisation” and the “chain sensitivity parameter” are applied to the gain and noise sensitivities as well. In addition, “numerical perturbation” is applied to the calculation of all the sensitivities. © 2006 Wiley Periodicals, Inc. Int J RF and Microwave CAE, 2006.
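The “numerical perturbation” cross-check mentioned at the end is, in essence, a finite-difference derivative of each performance measure with respect to each network parameter. A minimal sketch, with a hypothetical stand-in for the full scattering-matrix gain evaluation:

```python
# Central finite-difference sensitivity of a performance measure with
# respect to one network parameter. transducer_gain_db is a hypothetical
# stand-in for a full connection-scattering-matrix evaluation.
import numpy as np

def transducer_gain_db(params):
    z0, f = params
    return 12.0 - 0.01 * (z0 - 50.0) ** 2 - 0.1 * (f - 6.0) ** 2

def sensitivity(fun, params, i, h=1e-4):
    up, dn = params.copy(), params.copy()
    up[i] += h
    dn[i] -= h
    return (fun(up) - fun(dn)) / (2.0 * h)

p = np.array([52.0, 6.5])   # e.g., line impedance (ohm), frequency (GHz)
print("dG_T/dZ0 =", sensitivity(transducer_gain_db, p, 0))
```

Unlike the adjoint approach, this requires two extra network analyses per parameter, which is precisely the cost the adjoint-variable method avoids.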

5.
Metadata (i.e., data about data) of digital objects plays an important role in digital libraries and archives, and thus its quality needs to be maintained well. However, as digital objects evolve over time, their associated metadata evolves as well, causing consistency issues. Since various functionalities of applications containing digital objects (e.g., a digital library or a public image repository) are based on metadata, evolving metadata directly affects the quality of such applications. To make matters worse, modern data applications are often large-scale (holding millions of digital objects) and are populated by software agents or crawlers (and thus often contain automatically generated, erroneous metadata). In such an environment, it is challenging to quickly and accurately identify evolving metadata and fix it (if needed) while applications keep running. Despite the importance and implications of the problem, conventional solutions have been very limited. Most existing metadata-related approaches either focus on the model and semantics of metadata or simply keep an authority file of some sort for evolving metadata, and never fully exploit its potential from the system point of view. The questions that we raise in this paper are: when millions of digital objects and their metadata are given, (1) how can evolving metadata be quickly identified in various contexts? and (2) once evolving metadata are identified, how can they be incorporated into the system? The significance of this paper is that we investigate scalable algorithmic solutions for identifying evolving metadata, emphasize the role of “systems” in maintenance, and argue that systems must track metadata changes proactively and leverage the learned knowledge in their various services.

6.
Sensitivity analysis (SA) in spatially explicit agent-based models (ABMs) has emerged to address some of the challenges associated with model specification and parameterization. For spatially explicit ABMs, the comparison of spatial or spatio-temporal patterns has been advocated for evaluating models. Nevertheless, less attention has been paid to understanding the extent to which parameter values in ABMs are responsible for mismatches between model outcomes and observations. In this paper, we propose the use of multiple-scale space-time patterns in variance-based global sensitivity analysis (GSA). A vector-borne disease transmission model was used as the case study. The input factors used in GSA include one related to the environment (introduction rates), two related to interactions between agents and environment (level of herd immunity, mosquito population density), and one that defines agent state transition (mosquito extrinsic incubation period). The results show that parameters related to interactions between agents and the environment have a great impact on the ability of the model to reproduce observed patterns, although the magnitudes of such impacts vary across space-time scales. Additionally, the results highlight the time-dependent sensitivity to parameter values in spatially explicit ABMs. The GSA performed in this study helps to identify the input factors that need to be carefully parameterized so that ABMs reproduce observed patterns well at multiple space-time scales.

7.
Online configuration of large-scale systems such as networks requires parameter optimization within a limited amount of time, especially when configuration is needed as a response to recover from a failure in the system. To quickly configure such systems in an online manner, we propose a Probabilistic Trans-Algorithmic Search (PTAS) framework which leverages multiple optimization search algorithms in an iterative manner. PTAS applies a search algorithm to determine how to best distribute the available experiment budget among multiple optimization search algorithms. It allocates an experiment budget to each available search algorithm and observes its performance on the system at hand. PTAS then probabilistically reallocates the experiment budget for the next round in proportion to each algorithm's performance relative to the rest of the algorithms. This “roulette wheel” approach probabilistically favors the more successful algorithms in the next round. Following each round, the PTAS framework “transfers” the best result(s) among the individual algorithms, making our framework a trans-algorithmic one. PTAS thus aims to systematize how to “search for the best search” and to hybridize a set of search algorithms to attain a better search. We use three individual search algorithms, i.e., Recursive Random Search (RRS) (Ye and Kalyanaraman, 2004), Simulated Annealing (SA) (Laarhoven and Aarts, 1987), and Genetic Algorithm (GA) (Goldberg, 1989), and compare PTAS against the performance of RRS, GA, and SA. We show the performance of PTAS on well-known benchmark objective functions, including scenarios where the objective function changes in the middle of the optimization process. To illustrate the applicability of our framework to automated network management, we apply PTAS to the problem of optimizing the link weights of an intra-domain routing protocol on three different topologies obtained from the Rocketfuel dataset. We also apply PTAS to the problem of optimizing the aggregate throughput of a wireless ad hoc network by tuning the data rates of traffic sources. Our experiments show that PTAS successfully picks the best-performing algorithm, RRS or GA, and allocates the time wisely. Further, our results show that PTAS's performance is not transient and steadily improves as more time is available for the search.
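The core reallocation loop is simple enough to sketch. A minimal, hypothetical version (not the authors' code) of the roulette-wheel budget update, where each algorithm reports the best objective value it found in a round (lower is better):

```python
# PTAS-style roulette-wheel budget reallocation (illustrative sketch).
import random

def reallocate(total_budget, scores):
    # Lower score = better; turn scores into positive weights and split
    # the next round's budget proportionally to relative performance.
    worst = max(scores)
    weights = [worst - s + 1e-9 for s in scores]
    wsum = sum(weights)
    return [int(total_budget * w / wsum) for w in weights]

# Hypothetical stand-ins for RRS, SA, GA: each spends its budget and
# returns its best objective value (improves with budget, plus noise).
algos = [lambda b: 10.0 / (b + 1) + random.random() for _ in range(3)]

budgets = [100, 100, 100]
for rnd in range(3):
    scores = [algo(b) for algo, b in zip(algos, budgets)]
    budgets = reallocate(sum(budgets), scores)
    print(f"round {rnd}: scores={scores}, next budgets={budgets}")
```

In the full framework, each round also “transfers” the best point(s) found so far into every algorithm's starting state before the next round begins.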

8.
In this article, I focus on the robustness of geometric programs (e.g., Delaunay triangulation, intersection between surfacic or volumetric meshes, Voronoi-based meshing …) w.r.t. numerical degeneracies. Some of these geometric programs require “exotic” predicates not available in standard libraries (e.g., J.-R. Shewchuk's implementation and CGAL). I propose a complete methodology and a sample Open Source implementation of a toolset (PCK: Predicate Construction Kit) that makes it reasonably easy to design geometric programs free of numerical errors. The C++ code of each predicate is automatically generated from its formula, written in a simple specification language. Robustness is obtained through a combination of arithmetic filters, expansion arithmetic and symbolic perturbation. As an example of my approach, I give the formulas and PCK source code for the 4 predicates used to compute the intersection between a 3d Voronoi diagram and a tetrahedral mesh, as well as symbolic perturbations that provably escape the corner cases. This makes it possible to robustly compute the intersection between a Voronoi diagram and a triangle mesh, or between a Voronoi diagram and a tetrahedral mesh. Such an algorithm has several applications, including surface and volume meshing based on Lloyd relaxation.
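The arithmetic-filter idea generalizes across predicates: evaluate in floating point, trust the sign only when it clears an error bound, and otherwise fall back to exact arithmetic. A minimal sketch for a 2D orientation predicate (the error bound here is a deliberately coarse illustration, not a tight Shewchuk-style constant):

```python
# Filtered 2D orientation predicate with an exact rational fallback.
from fractions import Fraction

def orient2d(ax, ay, bx, by, cx, cy):
    det = (bx - ax) * (cy - ay) - (by - ay) * (cx - ax)
    # Coarse static filter: only trust the floating-point sign when |det|
    # is safely above an estimate of the rounding noise.
    errbound = 1e-12 * (abs(bx - ax) + abs(by - ay)) \
                     * (abs(cx - ax) + abs(cy - ay))
    if abs(det) > errbound:
        return 1 if det > 0 else -1
    # Fallback: exact evaluation with rationals (no rounding at all).
    F = Fraction
    exact = (F(bx) - F(ax)) * (F(cy) - F(ay)) \
          - (F(by) - F(ay)) * (F(cx) - F(ax))
    return (exact > 0) - (exact < 0)

print(orient2d(0.0, 0.0, 1.0, 0.0, 0.5, 1e-20))  # near-degenerate input
```

PCK additionally resolves the exactly-zero case by symbolic perturbation, so the generated predicates never report a degenerate configuration to the caller.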

9.
G. Söderlind. Computing 49(3):303–314, 1992
It is well known that linear time-varying high-index DAEs can be very sensitive to parametric perturbations [1, p. 31]. Stability is also affected, as is known from singular perturbation theory. In this note, we show that arbitrarily small and smooth perturbations can cause dramatic instabilities by introducing “small” perturbations of the matrix pencil's generalized eigenvalues at z = ∞, leading to large positive finite eigenvalues. The smaller the perturbation, the larger the instability of the perturbed problem, in contrast to the ODE case. Some high-index problems can thus be considered marginally stable, with neighboring problems (usually of lower index) exhibiting severe instabilities.
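A one-dimensional illustration of the mechanism (my example, not taken from the paper): the scalar algebraic equation 0 · ẋ = x has its generalized eigenvalue at z = ∞; perturbing the leading coefficient by a small ε > 0 gives

```latex
\varepsilon \,\dot{x} = x
\quad\Longrightarrow\quad
x(t) = x(0)\, e^{t/\varepsilon},
```

so the eigenvalue moves from z = ∞ to z = 1/ε, and the growth rate e^{t/ε} becomes more violent as the perturbation ε shrinks, the opposite of what happens when a well-posed ODE is perturbed.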

10.
Here we comment on the article “On the mapping of genotype to phenotype in evolutionary algorithms” by Peter A. Whigham, Grant Dick, and James Maclaurin. The authors present a critical view on the use of genotype-to-phenotype mapping in Evolutionary Algorithms and on how the use of this analogy can be detrimental to problem solving. They examine a grammar-based approach to Genetic Programming (GP), Grammatical Evolution (GE), and highlight properties of GE that are detrimental to effective evolutionary search. Rather than use loose analogies and metaphors, we suggest that the focus should be (and has been, in GE and other approaches to GP) on addressing one of the most significant open issues in our field: what is the sufficient set of features in natural genetic, evolutionary, and developmental systems that can translate into the most effective computational approaches to program synthesis?

11.
Social media influence analysis, sometimes also called authority detection, aims to rank users based on their influence scores in social media. Existing approaches to social influence analysis usually focus on how to develop effective algorithms to quantify users' influence scores. They rarely consider a person's expertise levels, which are arguably important to influence measures. In this paper, we propose a computational approach to measuring the correlation between expertise and social media influence, and we take a new perspective on understanding social media influence by incorporating expertise into influence analysis. We carefully constructed a large dataset of 13,684 Chinese celebrities from Sina Weibo (literally “Sina microblogging”). We found that there is a strong correlation between expertise levels and social media influence scores. Our analysis gives a good explanation of the phenomenon of “top cross-domain influencers”. In addition, different expertise levels showed different influence variation patterns, e.g.: (1) high-expertise celebrities have stronger influence on the “audience” in their expertise domains; (2) expertise seems to be more important than relevance and participation for social media influence; and (3) the audiences of top-expertise celebrities are more likely to forward tweets from high-expertise celebrities on topics outside their expertise domains.

12.
Efficient sensitivity analysis, particularly global sensitivity analysis (GSA) to identify the most important or sensitive parameters, is crucial for understanding complex hydrological models, e.g., distributed hydrological models. In this paper, we propose an efficient integrated approach that combines a qualitative screening method (the Morris method) with a quantitative analysis method based on a statistical emulator (a variance-based method with the response surface method, named the RSMSobol' method) to reduce the computational burden of GSA for time-consuming models. Using the Huaihe River Basin of China as a case study, the proposed approach is used to analyze the parameter sensitivity of the distributed time-variant gain model (DTVGM). First, the Morris screening method is used to qualitatively identify parameter sensitivity. Subsequently, a statistical emulator built with the multivariate adaptive regression spline (MARS) method is chosen as the surrogate model to quantify the sensitivity indices of the DTVGM. The results reveal that the soil moisture parameter WM is the most sensitive for all the responses of interest. The parameters Kaw and g1 are relatively important for the water balance coefficient (WB) and the Nash–Sutcliffe coefficient (NS), while the routing parameter RoughRss is very sensitive for the Nash–Sutcliffe coefficient (NS) and the correlation coefficient (RC). The results also demonstrate that the proposed approach is much faster than the brute-force approach and is an effective and efficient method owing to its low CPU cost and adequate accuracy.
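The first (screening) stage is cheap enough to sketch directly. A minimal, hypothetical version using SALib's Morris implementation, with a toy model standing in for the DTVGM (the parameter names are borrowed from the abstract; the bounds and model are illustrative assumptions):

```python
# Morris elementary-effects screening before the emulator-based stage.
# Assumes SALib is installed; the toy model stands in for the DTVGM.
import numpy as np
from SALib.sample import morris as morris_sample
from SALib.analyze import morris as morris_analyze

problem = {
    "num_vars": 3,
    "names": ["WM", "Kaw", "RoughRss"],
    "bounds": [[50, 300], [0.1, 1.0], [0.01, 0.5]],
}

def toy_model(x):
    return np.log(x[0]) + 2.0 * x[1] + 0.1 * x[2]

X = morris_sample.sample(problem, N=100, num_levels=4)
Y = np.apply_along_axis(toy_model, 1, X)
Si = morris_analyze.analyze(problem, X, Y, num_levels=4)

# mu* ranks overall influence; parameters with small mu* can be fixed
# before the expensive quantitative (RSMSobol') stage.
for name, mu in zip(problem["names"], Si["mu_star"]):
    print(f"{name}: mu* = {mu:.3f}")
```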

13.
The planning stage in the development of an information system (IS) is important for IS/business alignment. Accordingly, academics and practitioners in both developed and developing countries are concerned about the impact of leadership orientation on strategic IS planning (SISP). The focus of this research is to identify the nature of the relationship between leadership orientations and IS planning approaches in the context of Libyan organizations. To investigate this relationship, a postal survey was conducted to collect data from 117 executives responsible for IS planning. The questionnaire asked about leadership values and SISP approaches using multi-item, multi-scaled questions. The results show that “controlling” and “competing” leadership orientations have a positive direct effect on all SISP approaches. “Coordinating” leadership orientations exhibited the highest positive association with rational, adaptable, and intuitive SISP approaches. The results of this research have important implications for Libyan organizations, especially as they attempt to rebuild the country's economy after the Libyan revolution. These implications are discussed in detail in the paper.

14.
Context: Given the increased interest in using visualization techniques (VTs) to help communicate and understand the software architecture (SA) of large-scale complex systems, several VTs and tools have been reported for representing architectural elements (such as architecture design, architectural patterns, and architectural design decisions). However, there has been no attempt to systematically review and classify the VTs and associated tools reported for SA, or how they have been assessed and applied.
Objective: This work aimed to systematically review the literature on software architecture visualization in order to develop a classification of VTs in SA, analyze the level of reported evidence and the use of different VTs for representing SA in different application domains, and identify gaps for future research in the area.
Method: We used the systematic literature review (SLR) method of evidence-based software engineering (EBSE), with both manual and automatic search strategies, to find relevant papers published between 1 February 1999 and 1 July 2011.
Results: We selected 53 papers from the 23,056 initially retrieved articles for data extraction, analysis, and synthesis, based on pre-defined inclusion and exclusion criteria. The data analysis enabled us to classify the identified VTs into four types by usage popularity: graph-based, notation-based, matrix-based, and metaphor-based. VTs in SA are mostly used for architecture recovery and architectural evolution activities. We also identified ten purposes for using VTs in SA. Our results further revealed that VTs in SA have been applied to a wide range of application domains, among which “graphics software” and “distributed systems” have received the most attention.
Conclusion: SA visualization has gained significant importance for understanding and evolving software-intensive systems, yet only a few VTs have been employed in industrial practice. This review enabled us to identify the following areas for further research and improvement: (i) more research is needed on applying VTs in architectural analysis, architectural synthesis, architectural implementation, and architecture reuse activities; (ii) more attention should be paid to using objective evaluation methods (e.g., controlled experiments) to provide convincing evidence for the promised benefits of using VTs in SA; and (iii) industrial surveys should be conducted to investigate how software architecture practitioners actually employ VTs in the architecting process and what issues hinder or prevent them from adopting VTs in SA.

15.
The conformal prediction framework allows the probability of making incorrect predictions to be specified by a user-provided confidence level. In addition to a learning algorithm, the framework requires a real-valued function, called a nonconformity measure, to be specified. The nonconformity measure does not affect the error rate, but the resulting efficiency, i.e., the size of the output prediction regions, may vary substantially. A recent large-scale empirical evaluation of conformal regression approaches showed that using random forests as the learning algorithm, together with a nonconformity measure based on out-of-bag errors normalized by a nearest-neighbor-based difficulty estimate, resulted in state-of-the-art performance with respect to efficiency. However, the nearest-neighbor procedure incurs a significant computational cost. In this study, a more straightforward nonconformity measure is investigated, where the difficulty estimate employed for normalization is based on the variance of the predictions made by the trees in a forest. A large-scale empirical evaluation is presented, showing that both the nearest-neighbor-based and the variance-based measures significantly outperform a standard (non-normalized) nonconformity measure, while no significant difference in efficiency between the two normalized approaches is observed. The evaluation moreover shows that the computational cost of the variance-based measure is several orders of magnitude lower than that of the nearest-neighbor-based nonconformity measure. The use of out-of-bag instances for calibration does, however, result in nonconformity scores that are distributed differently from those obtained from test instances, calling the validity of the approach into question. An adjustment of the variance-based measure is presented, which is shown to be valid and also to have a significant positive effect on efficiency. For conformal regression forests, the variance-based nonconformity measure is hence a computationally efficient and theoretically well-founded alternative to the nearest-neighbor procedure.
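A minimal sketch of the variance-based measure (not the authors' code; the synthetic data and the smoothing constant beta are illustrative assumptions): difficulty is estimated as the variance of the per-tree predictions, and nonconformity is the normalized absolute error.

```python
# Variance-based nonconformity for conformal regression with a forest.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(500, 2))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(500)

rf = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

def nonconformity(rf, X, y, beta=0.1):
    # Per-tree predictions: shape (n_trees, n_samples).
    per_tree = np.stack([t.predict(X) for t in rf.estimators_])
    mean = per_tree.mean(axis=0)
    difficulty = per_tree.var(axis=0)   # variance of the tree predictions
    return np.abs(y - mean) / (difficulty + beta)

print(nonconformity(rf, X[:5], y[:5]))
```

Because the per-tree predictions are already computed when the forest predicts, the difficulty estimate is essentially free, which is where the orders-of-magnitude speedup over the nearest-neighbor estimate comes from.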

16.
This paper studies the impulse-free robustness of uncertain singular (descriptor) systems under two kinds of parameter perturbations (structured and unstructured), and gives sufficient criteria for the system to remain impulse-free. On this basis, it further studies the impulse-robust control problem over a large perturbation range for uncertain singular systems under structured parameter perturbations: provided that the nominal system satisfies certain conditions and the bounds of the structured parameter perturbations are arbitrarily given, concrete design steps for this class of controllers are proposed. Finally, an example illustrates the feasibility of the conclusions.

17.
Spatially explicit demographic models are increasingly being used to forecast the effect of global change on the range dynamics of species. These models are typically complex, with their structure and parameter values often estimated with considerable uncertainty. If not properly accounted for, this uncertainty can lead to bias or false precision in projections of changes to species range dynamics and extinction risk. Here we present a new open-source freeware tool, “Sensitivity Analysis of Range Dynamics Models” (SARDM), that provides an all-in-one approach for: (i) determining the implications of integrating complex and often uncertain information into spatially explicit demographic models compiled in RAMAS GIS, and (ii) identifying and ranking the relative importance of different sources of parameter uncertainty. The sensitivity and uncertainty analysis techniques built into SARDM will help ecologists and conservation scientists better establish confidence in forecasts of range movement and abundance.

18.
We humans usually think in words; to represent our opinion about, e.g., the size of an object, it is sufficient to pick one of the few (say, five) words used to describe size (“tiny,” “small,” “medium,” etc.). Indicating which of five words we have chosen takes 3 bits. However, in modern computer representations of uncertainty, real numbers are used to represent this “fuzziness.” A real number takes 10 times more memory to store, and therefore, processing a real number takes 10 times longer than it should. Therefore, for computers to reach the ability of a human brain, Zadeh proposed to represent and process uncertainty in the computer by storing and processing the very words that humans use, without translating them into real numbers (he called this idea granularity). If we try to define operations with words, we run into the following problem: e.g., if we define “tiny” + “tiny” as “tiny,” then we have to make the counter-intuitive conclusion that the sum of any number of tiny objects is also tiny. If we define “tiny” + “tiny” as “small,” we may be overestimating the size. To overcome this problem, we suggest the use of nondeterministic (probabilistic) operations with words. For example, in the above case, “tiny” + “tiny” is, with some probability, equal to “tiny,” and with some other probability, equal to “small.” We also analyze the advantages and disadvantages of this approach. The main advantage is that we now have granularity and can thus speed up the processing of uncertainty. The main disadvantage is that in some cases, when defining symmetric associative operations on the set of words, we must give up either symmetry or associativity. Luckily, this necessity does not always arise: in some cases, we can define symmetric associative operations. © 1997 John Wiley & Sons, Inc.
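A minimal sketch of such a nondeterministic word operation (the probability values are illustrative assumptions, not taken from the article): repeated addition of “tiny” objects now eventually escapes “tiny,” resolving the counter-intuitive conclusion above.

```python
# Probabilistic "addition" of linguistic size words.
import random

WORDS = ["tiny", "small", "medium", "large", "huge"]

# Distribution over possible results of "+" for selected word pairs.
ADD_RULES = {
    ("tiny", "tiny"): [("tiny", 0.7), ("small", 0.3)],
    ("tiny", "small"): [("small", 0.8), ("medium", 0.2)],
}

def add_words(a, b):
    outcomes = ADD_RULES.get((a, b)) or ADD_RULES.get((b, a))
    if outcomes is None:
        return max(a, b, key=WORDS.index)   # fallback: keep the larger word
    r, acc = random.random(), 0.0
    for word, p in outcomes:
        acc += p
        if r < acc:
            return word
    return outcomes[-1][0]

total = "tiny"
for _ in range(10):
    total = add_words(total, "tiny")
print(total)   # usually no longer "tiny"
```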

19.
A software complexity metric is a quantitative measure of the difficulty of comprehending and working with a specific piece of software. The majority of metrics currently in use focus on a program's “microcomplexity,” i.e., how difficult the details of the software are to deal with. This paper proposes a method of measuring “macrocomplexity,” i.e., how difficult the overall structure of the software is to deal with, as well as the microcomplexity. We evaluate this metric using data obtained during the development of a compiler/environment project involving over 30,000 lines of C code. The new metric's performance is compared to that of several other popular metrics, with mixed results. We then discuss how these metrics, or any other metrics, may be used to help increase project management efficiency.

20.
The study of the human visual system is of great interest for quantifying the quality of an image or predicting perceived information. The contrast sensitivity function (CSF) is one of the main ways to incorporate human visual system properties into an imaging system; it characterizes the visual system's sensitivity to spatial frequencies. In this paper, we establish a pretreatment for existing full-reference metrics (“peak signal-to-noise ratio”, “digital video quality”) for the H.264/MPEG-4 (Motion Picture Expert Group) Advanced Video Coding standard. Our algorithm applies an FFT so that the contrast sensitivity function can be applied in the frequency domain. The method is applicable to images and video sequences of any size, which are first enlarged to power-of-two dimensions by “mirror image” padding. We evaluate the performance of the proposed pretreatment using the subjective “LIVE” video databases. The performance metrics, i.e., the Pearson linear correlation coefficient (PLCC), the Spearman rank-order correlation coefficient (SROCC), and the root mean square prediction error (RMSE), indicate that the proposed method performs well on H.264 codec distortions.
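A minimal sketch of this pretreatment (not the paper's code): mirror-pad a frame up to power-of-two dimensions, take its 2D FFT, and weight the spectrum with a CSF-like function. The radial band-pass CSF used below is an illustrative stand-in, not the model from the paper.

```python
# Mirror-pad to power-of-two size, then apply a CSF weighting in the
# frequency domain. The CSF formula here is a toy band-pass model.
import numpy as np

def next_pow2(n):
    return 1 << (n - 1).bit_length()

def csf_weight(shape):
    # Radial spatial frequency for each FFT bin, with a toy band-pass CSF.
    fy = np.fft.fftfreq(shape[0])[:, None]
    fx = np.fft.fftfreq(shape[1])[None, :]
    r = np.hypot(fx, fy)
    return 2.6 * (0.0192 + 114 * r) * np.exp(-(114 * r) ** 1.1)

def pretreat(img):
    h, w = img.shape
    H, W = next_pow2(h), next_pow2(w)
    # "Mirror image" (symmetric) padding up to power-of-two dimensions.
    padded = np.pad(img, ((0, H - h), (0, W - w)), mode="symmetric")
    spectrum = np.fft.fft2(padded) * csf_weight((H, W))
    filtered = np.real(np.fft.ifft2(spectrum))
    return filtered[:h, :w]  # crop back to the original size

frame = np.random.rand(120, 200)  # stand-in video frame
print(pretreat(frame).shape)      # (120, 200)
```

The CSF-weighted frames can then be fed to the unmodified full-reference metrics (PSNR, DVQ), which is the whole point of treating the CSF as a pretreatment rather than a new metric.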
