Similar Articles
20 similar articles found (search time: 31 ms)
1.
Environmental modeling usually involves large numbers of input and output data, especially when the modeling is on a regional or national scale. Analyzing these data can then become a serious problem in itself. The interactive comparative display system (ICDS) helps to overcome this difficulty. Once it has been implemented for a given model, the menu-driven user interface makes it very easy to select a subset of the data and display it on a color screen. Data are retrieved from a hierarchically structured database. The display can take the form of thematic maps (i.e. colorings of subunits with a fixed geometry) or of pie or bar charts. By using a screen layout with twin maps and charts, the system enables visual comparative analysis of data, mobilizing the excellent human capability for comparing two objects.

2.
The transportation of dangerous goods is a complex issue involving potential consequences for a wide range of high-stake elements. Hydrocarbon transportation in particular requires a global study in order to assess the risks involved. The aim of this study is to develop a prediction code for analyzing different possible hydrocarbon supply routes, in order to determine whether modifying the flow of hydrocarbon transportation significantly increases the risk (for people, infrastructure and the environment). On the one hand, this paper details the methodology proposed for assessing risk levels using hazard scenarios and the vulnerability of high-stake elements. On the other hand, it presents the modeling tool developed (CARTENJEUX), built on an existing geographical information system (MapInfo), through a case study (Paris, France). Several maps (accident severity, vulnerability and risk levels) generated with CARTENJEUX are presented to illustrate how stakeholders can determine preferential routes at the regional scale.

3.
In this paper, the great deluge algorithm (GDA), which had not previously been applied to constrained mechanical design optimization, is employed to solve several design optimization problems selected from the literature. The GDA needs only one basic parameter to be set, which makes it very attractive for solving optimization problems. For the first time, this paper examines whether the performance of a very simple algorithm like the GDA on complex constrained non-linear design optimization problems can be enhanced by embedding chaotic maps in its neighborhood generation mechanism. Eight different chaotic maps are tested and compared. It is observed that chaotic maps can considerably improve the performance of the GDA and enable it to find the best possible solutions for the studied problems.
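The abstract does not reproduce the algorithm itself, but the combination it describes is straightforward to sketch. Below is a minimal great deluge loop in which a logistic map (one plausible choice among the eight chaotic maps tested; the paper's actual maps, step sizes and constraint handling are not given here) replaces the uniform random draw in the neighborhood move. All names and constants are illustrative.

```python
def logistic_map(x, r=4.0):
    """Chaotic logistic map; successive values replace uniform random draws."""
    return r * x * (1.0 - x)

def great_deluge(objective, x0, bounds, rain_speed=1e-4, iters=20000):
    """Minimise `objective` with the great deluge algorithm.

    `rain_speed` is GDA's single control parameter: the rate at which
    the acceptable 'water level' is lowered toward the best solution.
    """
    x = list(x0)
    best_x, best_f = x[:], objective(x)
    level = best_f                       # initial water level
    c = 0.7                              # chaotic state, any value in (0, 1) except fixed points
    for _ in range(iters):
        cand = x[:]
        for i, (lo, hi) in enumerate(bounds):
            c = logistic_map(c)          # chaotic, not uniform, perturbation
            cand[i] = min(hi, max(lo, cand[i] + (c - 0.5) * (hi - lo) * 0.1))
        f = objective(cand)
        if f <= level:                   # accept anything below the water level
            x = cand
            if f < best_f:
                best_x, best_f = cand[:], f
            level -= rain_speed * (level - best_f)  # lower the level toward the best
    return best_x, best_f

# Toy constrained problem handled with a penalty term (illustrative only):
obj = lambda v: (v[0] - 2) ** 2 + (v[1] + 1) ** 2 + 1e3 * max(0.0, v[0] + v[1] - 3)
print(great_deluge(obj, [0.0, 0.0], [(-5, 5), (-5, 5)]))
```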

4.
Reuse is becoming one of the key approaches to managing the cost and quality of software systems. An important issue is the reliability of the components, which makes certification of software components a critical area. The objective of this article is to describe methods that can be used to certify and measure the ability of software components to fulfil the reliability requirements placed on them. A usage modelling technique is presented, which can be used to formulate usage models for components. This technique makes it possible not only to certify the components, but also to certify the system containing them. The usage model describes the usage from a structural point of view and is complemented with a profile describing the expected usage in quantitative terms. The failure statistics from the usage test form the input to a hypothesis certification model, which makes it possible to certify a specific reliability level with a given degree of confidence. The certification model is the basis for deciding whether the component can be accepted, either for storage as a reusable component or for reuse. It is concluded that the proposed method makes it possible to certify software components, both when developing for reuse and when developing with reuse.
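The abstract does not give the hypothesis certification model itself. As a flavour of how a reliability level can be certified at a given confidence from usage-test outcomes, here is the standard zero-failure demonstration calculation — a stand-in for illustration, not the authors' model.

```python
import math

def zero_failure_test_cases(reliability, confidence):
    """Usage-test cases that must run failure-free to certify `reliability`
    (probability that a randomly drawn usage case succeeds) at `confidence`.

    Derived from requiring reliability ** n <= 1 - confidence.
    """
    return math.ceil(math.log(1.0 - confidence) / math.log(reliability))

# Certifying R = 0.999 with 95% confidence:
print(zero_failure_test_cases(0.999, 0.95))  # -> 2995 failure-free cases
```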

5.
Resource leaks are an important class of software defects that affect software quality and reliability: a program containing resource leaks that runs for a long time may raise exceptions or even crash once its resources are exhausted. Static code analysis is an effective technique for resource leak detection, able to uncover potential leaks from source code or binaries. However, the complexity of precise resource leak detection algorithms grows exponentially with program size, which cannot meet the practical need for on-the-fly defect analysis during development. This paper proposes an incremental static resource leak detection method for large-scale source code. The method supports inter-procedural, flow-sensitive leak detection: as the user edits code, it starts from the changed functions and narrows the detection scope using techniques such as resource closures and points-to-analysis filtering, thereby enabling instant defect analysis and reporting on large code bases. Experimental results show that, without sacrificing precision, 90% of the incremental detection runs complete within 10 s, satisfying the practical requirement of detecting and reporting defects instantly while the user edits the program.

6.
7.
Evaluating the quality of entity relationship models
Entity Relationship (E-R) models are at the core of logical database design. This paper describes the development of a model, associated metrics and a methodology for assessing the quality of an E-R model. The model was developed by investigating the causal relationships between ontological and behavioural factors influencing data quality. The methodology describes the aggregation of scores on the various metrics to calculate an overall quality score for an E-R model, and the use of the model to identify problem areas when the individual quality scores on different factors do not meet organizational standards. Possible further improvements of the model and future research issues are also discussed.

8.
Stochastic and non-deterministic influences affect cutting processes and lead to unsteady, dynamic process behaviour. Concepts for improving process reliability and controlling tolerances have to be developed in order to meet the increasing requirements on product quality. A concept for improving manufacturing accuracy through artificial neural networks (ANN) is presented, using the turning process as an example. The ANN model makes it possible to predict the dimensional deviation caused by tool wear. By feeding this prediction back in an open loop within the machine controller, the deviation can be compensated through adaptive control of the depth of cut.

9.
LIU Bangzhou, WANG Binqiang, WANG Wenbo, WU Di 《计算机应用》(Journal of Computer Applications), 2016, 36(12): 3239-3243
To address the high computational complexity of multi-controller deployment models in large-scale software-defined networks (SDN), this paper defines several metrics of network service quality, including control-link reliability, and proposes a sub-domain partitioning and controller placement method for large-scale SDN. The method first partitions the network into multiple sub-domains using an improved label propagation algorithm (LPA), and then deploys a controller in each sub-domain. While accounting for multiple performance indicators, including average control-link latency, reliability and controller load balancing, it reduces the computational complexity of the problem model to linear in the network size. Experimental results show that, compared with the original LPA, the proposed algorithm clearly improves controller load balancing; compared with the capacitated controller placement (CCP) algorithm, it clearly improves both the computational complexity of the model and the network service quality: on the Internet2 topology, average control-link latency is reduced by up to 9% and control-link reliability is improved by up to 10%.
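The paper's improvements to LPA are not detailed in the abstract. A plain label propagation sketch on an adjacency-list topology (illustrative only; the improved tie-breaking and constraints are not reproduced) shows the sub-domain partitioning step that precedes controller placement.

```python
import random
from collections import Counter

def label_propagation(adj, iters=20, seed=0):
    """Plain label propagation: each switch repeatedly adopts the most
    common label among its neighbours; densely connected regions converge
    to shared labels, yielding candidate controller sub-domains."""
    rng = random.Random(seed)
    labels = {v: v for v in adj}            # every node starts as its own domain
    nodes = list(adj)
    for _ in range(iters):
        rng.shuffle(nodes)                  # asynchronous updates in random order
        changed = False
        for v in nodes:
            if not adj[v]:
                continue
            counts = Counter(labels[u] for u in adj[v])
            top = max(counts.values())
            best = rng.choice([l for l, c in counts.items() if c == top])
            if labels[v] != best:
                labels[v], changed = best, True
        if not changed:
            break
    return labels

# Toy topology: two clusters bridged by a single link.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}
print(label_propagation(adj))  # nodes 0-2 and 3-5 end up in separate sub-domains
```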

10.
To resolve the data ambiguity that arises when the same geographic feature is captured repeatedly, and to account for differences in positional accuracy among data sources, this paper proposes a multi-factor algorithm for adjusting the geometric position of map features. Three main evaluation factors affecting the adjustment are identified and analysed; the entropy method determines their relative importance, and they are combined to derive a confidence value for each feature. For linear and areal features, the discrete Fréchet distance is used to identify corresponding point pairs on matching features, and a position-weighted average yields the adjusted position. Tests on selected features from nautical and land charts show that the algorithm improves the quality of spatial position adjustment.
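The discrete Fréchet distance used for matching has a standard dynamic-programming form. The sketch below pairs it with a hypothetical precision-weighted merge of matched points; the paper's exact weighting (derived from its entropy-based confidence values) is not given, so the weights here are placeholders.

```python
from math import dist          # Python 3.8+
from functools import lru_cache

def discrete_frechet(P, Q):
    """Discrete Fréchet distance between two polylines, used to decide
    whether two lines from different sources depict the same feature."""
    @lru_cache(maxsize=None)
    def c(i, j):
        d = dist(P[i], Q[j])
        if i == 0 and j == 0:
            return d
        if i == 0:
            return max(c(0, j - 1), d)
        if j == 0:
            return max(c(i - 1, 0), d)
        return max(min(c(i - 1, j), c(i - 1, j - 1), c(i, j - 1)), d)
    return c(len(P) - 1, len(Q) - 1)

def weighted_merge(p, q, wp, wq):
    """Position-weighted average of a matched point pair (weights would
    come from the per-source confidence values; illustrative here)."""
    s = wp + wq
    return ((wp * p[0] + wq * q[0]) / s, (wp * p[1] + wq * q[1]) / s)

# Two digitisations of the same coastline segment:
P = [(0, 0.00), (1, 0.10), (2, 0.00)]
Q = [(0, 0.05), (1, 0.00), (2, 0.05)]
print(discrete_frechet(P, Q))           # small value -> likely the same feature
print(weighted_merge(P[0], Q[0], 0.7, 0.3))
```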

11.
Active snake contours and Kohonen's self-organizing feature maps (SOMs) are employed for representing and evaluating discrete point maps of indoor environments efficiently and compactly. A generic error criterion is developed for comparing two different sets of points based on the Euclidean distance measure. The point sets can be chosen as (i) two different sets of map points acquired with different mapping techniques or different sensing modalities, (ii) two sets of curve points fitted to maps extracted by different mapping techniques or sensing modalities, or (iii) a set of extracted map points and a set of fitted curve points. The error criterion makes it possible to compare the accuracy of maps obtained with different techniques among themselves, as well as against an absolute reference. Guidelines for selecting and optimizing the parameters of active snake contours and SOMs are provided using uniform sampling of the parameter space and particle swarm optimization (PSO). A demonstrative example from ultrasonic mapping based on experimental data is given and compared with a very accurate laser map, considered an absolute reference. Both techniques can fill the erroneous gaps in discrete point maps. Snake curve fitting results in more accurate maps than SOMs because it is more robust to outliers. The two methods and the error criterion are sufficiently general to be applicable to discrete point maps acquired with other mapping techniques and other sensing modalities.
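The abstract leaves the exact Euclidean error criterion unspecified. One common instantiation, a symmetric average nearest-neighbour distance between two point sets, might look like the following sketch; it is an illustrative form, not necessarily the authors' criterion.

```python
import math

def avg_nearest_neighbor(A, B):
    """Mean Euclidean distance from each point of A to its nearest point of B."""
    return sum(min(math.dist(a, b) for b in B) for a in A) / len(A)

def map_error(A, B):
    """Symmetric point-set error between two maps (illustrative form)."""
    return 0.5 * (avg_nearest_neighbor(A, B) + avg_nearest_neighbor(B, A))

# E.g. an ultrasonic map compared against an accurate laser reference:
ultrasonic = [(0, 0.00), (1, 0.20), (2, 0.10)]
laser_ref  = [(0, 0.05), (1, 0.00), (2, 0.00)]
print(map_error(ultrasonic, laser_ref))
```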

12.
In this paper we propose a method for analysing and visualizing individual maps between shapes, or collections of such maps. Our method is based on isolating and highlighting areas where the maps induce significant distortion of a given measure in a multi-scale way. Unlike the majority of prior work, which focuses on discovering maps in the context of shape matching, our main focus is on evaluating, analysing and visualizing a given map, and the distortions it introduces, in an efficient and intuitive way. We are motivated primarily by the fact that most existing metrics for map evaluation are quadratic and expensive to compute in practice, and that current map visualization techniques are suited primarily to global map understanding and typically do not highlight areas where the map fails to meet certain quality criteria in a multi-scale way. We propose to address these challenges in a unified way by considering the functional representation of a map and performing spectral analysis on this representation. In particular, we propose a simple multi-scale method for map evaluation and visualization, which provides detailed multi-scale information about the distortion induced by a map and can be used alongside existing global visualization techniques.

13.
Land-use/cover change (LUCC) has emerged as a crucial component of applied research in remote sensing. This work compares two methodologies, based on two data sources, for assessing the amount of land transformed from open to built space in three regions in Israel. We use a decision-tree methodology to classify open and built space from remotely sensed (RS) Landsat data, and a geographic information system (GIS) platform to analyse 1:50 000 scale survey maps. The methodologies are developed independently, used to quantify and characterize the spatial pattern of built space, and then analysed for their strengths and weaknesses. We then develop a method for combining the built-area maps derived from the two methodologies, capitalizing on the strengths of each. The RS methodology had higher omission errors for built space in areas with high vegetation levels and low-density exurban development, but higher commission errors in the arid region. The GIS analysis generally had fewer errors, although it systematically missed built surfaces that were not specifically buildings or roads, as well as structures intentionally omitted from the maps. We recommend using maps for baseline estimates whenever possible, complemented by clusters of built areas identified with the RS methodology. The results of this comparative study are relevant to both researchers and practitioners who need to understand the strengths and weaknesses of the mapping techniques they use.

14.
During software development, many decisions need to be made to guarantee the satisfaction of the stakeholders' requirements and goals. Full satisfaction of all of these requirements and goals may not be possible, requiring decisions over conflicting human interests as well as technological alternatives, with an impact on the quality and cost of the final solution. This work assesses the suitability of multi-criteria decision making (MCDM) methods for supporting software engineers' decisions. To this end, a Hybrid Assessment Method (HAM) is proposed, which gives its user the ability to perceive the influence different decisions may have on the final result. HAM is a simple and efficient method that combines a single pairwise comparison decision matrix (to determine the weights of the criteria) with a classical weighted decision matrix (to prioritize the alternatives). To avoid consistency problems with the scale and the prioritization method, HAM uses a geometric scale for assessing the criteria and the geometric mean for determining the alternative ratings.
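The two ingredients the abstract names — row geometric means of a single pairwise comparison matrix for criteria weights, and a classical weighted decision matrix for ranking — can be sketched directly. The geometric comparison scale (powers of 2), the criteria and the sample scores below are assumptions for illustration, not values from the paper.

```python
import numpy as np

def ham_weights(pairwise):
    """Criteria weights from one pairwise comparison matrix via the row
    geometric mean, sidestepping the consistency problems of other scales."""
    gm = np.prod(pairwise, axis=1) ** (1.0 / pairwise.shape[0])
    return gm / gm.sum()

def ham_rank(pairwise, scores):
    """Prioritise alternatives with a classical weighted decision matrix."""
    return scores @ ham_weights(pairwise)

# Hypothetical geometric scale for three criteria (cost, quality, risk):
P = np.array([[1.0, 2.0, 4.0],
              [0.5, 1.0, 2.0],
              [0.25, 0.5, 1.0]])
scores = np.array([[0.6, 0.9, 0.4],    # alternative A, rated per criterion
                   [0.8, 0.5, 0.7]])   # alternative B
print(ham_rank(P, scores))             # higher value -> preferred alternative
```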

15.
This research on a model for assessing potential earthquake disaster losses applies remote sensing and geographic information system technology in combination to estimate the maximum possible losses in a potential earthquake disaster area, as well as the maximum direct and indirect economic losses in the potential seismic region, providing decision-support information for post-earthquake reconstruction.

16.
Traditional methods of assessing the accuracy of satellite-derived landcover maps are based on samples. In conditions of inaccessible terrain and a lack of up-to-date contextual information, the verification of samples is frequently infeasible. Such conditions are typical of many applications in developing countries and were encountered by the authors when mapping the landcover of the region of Manaus in the central Brazilian Amazon Basin. Furthermore, sample-based methods fail to provide information on the spatial distribution of thematic map reliability. This article describes a procedure for deriving reliability maps to accompany satellite-derived landcover maps.

17.
CHEN Xiaohe 《计算机工程》(Computer Engineering), 2004, 30(24): 195-197
A paperless examination system for multiple-choice questions across disciplines: the computer generates the test papers, the examination is taken on the computer, and marking and data processing are performed by the computer. A dynamic link library (DLL) is used to encrypt and decrypt the test items, improving reliability and flexibility, and OLE automation is used to store the marking results in an Excel worksheet for further data analysis.

18.
Introduction: The Visual Ergonomics Risk Assessment Method (VERAM) is a newly developed and validated method for assessing visual ergonomics at workplaces. VERAM consists of a questionnaire and an objective evaluation.
Objective: To evaluate the reliability of VERAM by assessing the test-retest reliability of the questionnaire and the intra- and inter-rater reliability of the objective evaluation.
Methods: Forty-eight trained evaluators used VERAM to evaluate visual ergonomics at 174 workstations. The time interval for test-retest and intra-rater evaluations was 2-3 weeks, and the time interval for inter-rater evaluations was 0-2 days. Test-retest reliability was assessed by intraclass correlation (ICC), the standard error of measurement (SEM) and the smallest detectable change (SDC). Intra- and inter-rater reliability were assessed with weighted kappa coefficients and absolute agreement. Systematic changes were analysed with repeated-measures analyses of variance and the Wilcoxon signed-rank test.
Results: The ICC of the questionnaire indices ranged from 0.69 to 0.87, while the SEM ranged from 7.21 to 10.19 on a scale from 1 to 100, and the SDC from 14.42 to 20.37. Intra-rater reliability of the objective evaluations ranged from 0.57 to 0.85 (kappa coefficients) and agreement from 69% to 91%. Inter-rater reliability ranged from 0.37 to 0.72 (kappa coefficients) and agreement from 52% to 87%.
Conclusion: VERAM is a reliable instrument for assessing risks in visual work environments, although reliability might increase further with improved training of evaluators. Complementary evaluations of VERAM's sensitivity to changes in the visual environment are needed.
Relevance to industry: It is advantageous to set up a work environment for maximal visual comfort, to avoid negative effects on work postures and movements and thus prevent visual and musculoskeletal symptoms. VERAM satisfies the need for a valid and reliable tool for determining risks associated with the visual work environment.
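The reliability statistics quoted follow standard definitions: SEM = SD·√(1 − ICC), and SDC as a multiple of SEM. A quick calculation reproduces the scale of the reported values; the SDC multiplier varies between studies (1.96·√2 ≈ 2.77 is also common) and is an assumption here.

```python
import math

def sem(sd, icc):
    """Standard error of measurement: SD * sqrt(1 - ICC)."""
    return sd * math.sqrt(1.0 - icc)

def sdc(sem_value, k=2.0):
    """Smallest detectable change as k * SEM; the multiplier k is an
    assumption, since different studies use different constants."""
    return k * sem_value

# ICC = 0.87 with an index standard deviation of 20 on the 1-100 scale:
s = sem(20.0, 0.87)
print(round(s, 2), round(sdc(s), 2))  # 7.21 and 14.42
```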

19.
In recent years, reliability growth models have gained a dominant role in the evaluation of the reliability of software products. However, their application to the quality assurance process of the software of a highly complex system, such as a telecommunications switching system, often shows that accurate results can be obtained only after some months of operation have elapsed. It is then impossible to answer the typical questions that arise during the quality assurance phase: 'Is the software ready for release? How long will it take before it is ready?'. This paper describes a multi-variable (MV) model developed to overcome this limitation by modelling the quality assurance process more faithfully. A case study is presented, comparing the results obtained by applying existing reliability growth models and the MV model to it. Besides attaining surprising accuracy in its predictions and in measuring the reliability of the product, the MV model also tries to tackle a different kind of question: 'How can the effectiveness of testing be improved?'.

20.
The goal of this study is to present an efficient strategy for the reliability analysis of multidisciplinary analysis systems. Existing methods perform the reliability analysis using nonlinear optimization techniques, mainly because they directly apply multidisciplinary design optimization (MDO) frameworks to the reliability analysis formulation. Accordingly, the reliability analysis and the multidisciplinary analysis (MDA) are tightly coupled in a single optimizer, which hampers the use of recursive and function-approximation-based reliability analysis methods such as the first-order reliability method (FORM). To implement an efficient reliability analysis method for multidisciplinary analysis systems, we propose a new strategy named the sequential approach to reliability analysis for multidisciplinary analysis systems (SARAM). In this approach, the reliability analysis and the MDA are decomposed and arranged sequentially, forming a recursive loop. The key features are as follows. First, by the nature of the recursive loop, the approach can utilize the efficient advanced first-order reliability method (AFORM), which is known to converge quickly in many cases and requires only the value and the gradient of the limit-state function. Second, the decomposed architecture makes it possible to execute concurrent subsystem analyses for both the reliability analysis and the MDA, conducted using the global sensitivity equation (GSE). The efficiency of the SARAM method was verified on two illustrative examples taken from the literature. Compared with existing methods, it required the fewest subsystem analyses while maintaining accuracy.
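SARAM's coupling of the reliability loop with the MDA through the GSE cannot be reproduced from the abstract, but the HL-RF recursion at the core of AFORM can be sketched on a toy explicit limit state. In SARAM, each evaluation of g and its gradient would trigger a full (decomposed, concurrent) MDA pass; here g is given in closed form for illustration.

```python
import numpy as np

def hlrf(g, grad_g, n, tol=1e-6, iters=100):
    """HL-RF recursion of the (advanced) first-order reliability method:
    find the most probable failure point of the limit state g(u) = 0 in
    standard normal space; beta = ||u*|| gives Pf approx. Phi(-beta)."""
    u = np.zeros(n)
    for _ in range(iters):
        gv, gg = g(u), grad_g(u)
        u_new = gg * (gg @ u - gv) / (gg @ gg)   # HL-RF update
        if np.linalg.norm(u_new - u) < tol:
            u = u_new
            break
        u = u_new
    return u, np.linalg.norm(u)

# Toy limit state: failure when u1 + u2 > 3, i.e. g = 3 - u1 - u2.
g = lambda u: 3.0 - u[0] - u[1]
dg = lambda u: np.array([-1.0, -1.0])
u_star, beta = hlrf(g, dg, 2)
print(u_star, beta)   # beta = 3 / sqrt(2), approx. 2.121
```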
