Similar Documents
20 similar documents found (search time: 593 ms).
1.
Peter M. Maurer 《Software》2005,35(8):787-797
A binary component is a separately compiled program that can be used as a part of a larger program. Binary components generally conform to an accepted technology such as JavaBeans or ActiveX, and generally support a rich program interface containing properties, methods and events. Binary components are generally used in a graphical user interface (GUI) environment. There are a number of benefits to be realized by converting command-line software into binary components. The most important of these is that GUI environments are more popular and more familiar to most people than command-line environments. Using binary components can greatly simplify a GUI implementation, to the point where it is only slightly more complicated than a typical command-line implementation. However, the benefits go beyond mere convenience. Binary components have much richer interfaces than command-line programs, and they are service-oriented rather than task-oriented: a task-oriented program has a main routine devoted to accomplishing a single task, whereas a service-oriented component has no main routine or main function and instead provides a variety of services to its clients. Binary components can be easily integrated with one another, which permits a design where each major feature of an application is implemented in a different component. Such a design encourages software reuse at the component level and facilitates low-impact feature upgrades. We first delineate a design-pattern-based methodology for converting command-line programs into components. We then illustrate these principles using two projects: a simulation system for digital circuits, and a data generation system for software and hardware testing. Copyright © 2005 John Wiley & Sons, Ltd.
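As a rough illustration of the task-oriented/service-oriented distinction described above, here is a minimal Python sketch (all names hypothetical, not from the paper): the component exposes a property, a method, and an event instead of a single main routine.

```python
# Minimal sketch (hypothetical names): a service-oriented component exposes
# properties, methods, and events instead of a single main() entry point.

class SimulatorComponent:
    """Service-oriented wrapper around what was once a command-line tool."""

    def __init__(self):
        self.clock_period = 10          # property: configurable by the client
        self._on_step_done = []         # event: subscriber callbacks

    def subscribe_step_done(self, callback):
        self._on_step_done.append(callback)

    def step(self, n=1):                # method: one service among several
        for i in range(n):
            # ... advance the simulation one clock period ...
            for cb in self._on_step_done:
                cb(i)

# Client code (e.g., a GUI event handler) drives the component:
sim = SimulatorComponent()
sim.subscribe_step_done(lambda i: print(f"step {i} finished"))
sim.step(3)
```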

2.
In this paper, a multiple objective 'Hybrid Co-evolution based Particle Swarm Optimisation' methodology (HCPSO) is proposed. This methodology is able to handle multiple objective optimisation problems in the area of ship design, where the simultaneous optimisation of several conflicting objectives is considered. The proposed method is a hybrid technique that merges the features of co-evolution and Nash equilibrium with an ε-disturbance technique to eliminate stagnation. The method also offers a way to identify an efficient set of Pareto (conflicting) designs and to select a preferred solution amongst these designs. The combination of the co-evolution approach and Nash-optima contributes to HCPSO by utilising faster search and evolution characteristics. The design search is performed within a multi-agent design framework to facilitate distributed synchronous cooperation. The most widely used test functions from the multiple-objective optimisation literature are utilised to test the HCPSO. In addition, a real case study, the internal subdivision problem of a ROPAX vessel, is provided to exemplify the applicability of the developed method.
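To make the Pareto terminology concrete, the following short Python sketch (illustrative only; it is not the HCPSO algorithm) shows the dominance test used to extract a non-dominated set from a pool of candidate designs, assuming all objectives are minimized.

```python
# A minimal sketch of the Pareto-dominance test used to extract the
# non-dominated (Pareto) set; objective vectors are minimized.

def dominates(a, b):
    """True if objective vector a dominates b (minimization)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]

designs = [(3.0, 7.1), (2.5, 8.0), (4.0, 6.5), (3.0, 9.0)]
print(pareto_front(designs))  # -> [(3.0, 7.1), (2.5, 8.0), (4.0, 6.5)]
```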

3.
Automated test case generation is one of the most critical components of software test automation. Symbolic execution, a program analysis technique, is widely used for it because it can produce test cases with high coverage, but path explosion and constraint solving largely limit its application to real-world programs. This work raises the granularity of analysis from statements to functions: key function information and control information extracted from the abstract syntax tree and the bytecode sequence yield a function-call relation model, and an algorithm is designed to generate function call paths (a function call path is the sequence of function calls or executions from program start to end). The approach not only reduces the number of test paths, easing path explosion, but also effectively sidesteps the difficulty of solving symbolic expressions when control conditions contain function calls. Experimental results show that the method optimises the test path set, reducing the number of test cases without lowering coverage.
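Below is a minimal sketch of lifting analysis from statements to functions, using Python's ast module as a stand-in for the paper's AST/bytecode front end; the call-path enumeration is a plain DFS and all code is illustrative.

```python
# Illustrative sketch: extract a call-relation model from the abstract
# syntax tree, then enumerate function call paths from an entry function.

import ast

source = """
def check(x):
    return x > 0

def main(x):
    if check(x):
        process(x)

def process(x):
    print(x)
"""

call_graph = {}
tree = ast.parse(source)
for fn in [n for n in ast.walk(tree) if isinstance(n, ast.FunctionDef)]:
    callees = [n.func.id for n in ast.walk(fn)
               if isinstance(n, ast.Call) and isinstance(n.func, ast.Name)]
    call_graph[fn.name] = callees

print(call_graph)  # {'check': [], 'main': ['check', 'process'], 'process': ['print']}

def call_paths(graph, node, path=()):
    """Enumerate call paths via simple DFS, staying within known functions."""
    path = path + (node,)
    succ = [c for c in graph.get(node, []) if c in graph]
    if not succ:
        yield path
    for c in succ:
        yield from call_paths(graph, c, path)

print(list(call_paths(call_graph, "main")))  # [('main', 'check'), ('main', 'process')]
```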

4.
Clustering forms one of the most visible conceptual and algorithmic frameworks for developing information granules. Regardless of the algorithm being used, the representation of information granules (clusters) is predominantly numeric (coming in the form of prototypes, partition matrices, dendrograms, etc.). In this paper, we consider a concept of granular prototypes that generalizes the numeric representation of the clusters and, in this way, helps capture more details about the data structure. By invoking the granulation-degranulation scheme, we design granular prototypes that reflect the structure of data to a higher extent than the representation provided by their numeric counterparts (prototypes). The design is formulated as an optimization problem, which is guided by the coverage criterion, meaning that we maximize the number of data for which their granular realization includes the original data. The granularity of the prototypes themselves is treated as an important design asset; hence, its allocation to the individual prototypes is optimized so that the coverage criterion becomes maximized. In this regard, several schemes of optimal allocation of information granularity are investigated, where interval-valued prototypes are formed around the already produced numeric representatives. Experimental studies are provided in which the design of granular prototypes of interval format is discussed and characterized.
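The coverage criterion can be illustrated with a small Python sketch (synthetic data, uniform granularity allocation; the paper optimizes the allocation rather than fixing it):

```python
# Coverage of interval (granular) prototypes: a datum is covered when it
# falls inside at least one interval built around a numeric prototype.

import numpy as np

data = np.array([1.1, 1.4, 2.0, 5.2, 5.9, 6.4])
prototypes = np.array([1.5, 6.0])       # numeric prototypes (e.g., from FCM)
eps = 0.5                               # granularity allocated per prototype

low, high = prototypes - eps, prototypes + eps
covered = [(low <= x) & (x <= high) for x in data]
coverage = np.mean([c.any() for c in covered])
print(f"coverage = {coverage:.2f}")     # fraction of data inside some interval
```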

5.
Research Progress and a Case Study of Population Data Spatialization Based on Remote Sensing and GIS (cited by 9; self-citations: 0; external: 9)
The importance of population data for studies of human-environment interaction at global, continental, and regional scales is widely recognized. However, census data aggregated by administrative units are hard to match with environmental data organized by natural units, so population data must be spatially disaggregated through modeling. The basic idea of population spatialization is to relate population counts to geographic factors on the Earth's surface, for which remote sensing and geographic information systems provide effective tools. This paper reviews the main projects and methodological approaches of remote sensing- and GIS-based population spatialization in China and abroad. Taking the Heihe River Basin as the study area, typical gridded population estimates (GPW, UNEP/GRID, LandScan, and China's 1 km gridded population dataset) are compared at the basin scale against government census statistics; the comparison shows that research institutions and researchers at home and abroad have already done a large amount of related work. Work outside China mainly covers inverting population from interpreted remote sensing information, from DMSP-OLS nighttime lights data, and directly from remotely sensed spectral features. Work within China started later but has developed rapidly, mostly building regression models from land-use data and other geographic factors (such as elevation, roads, and residential areas). The comparison of population distributions in the Heihe River Basin indicates that, among the datasets examined, China's 1 km gridded population dataset agrees best with the actual situation.
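A hedged sketch of the regression approach attributed above to the Chinese studies: census population is regressed on land-use composition per administrative unit, and the fitted densities disaggregate population to grid cells. All data below are synthetic.

```python
# Regression-based population spatialization: fit per-class population
# densities at the administrative-unit level, then apply to grid cells.

import numpy as np
from sklearn.linear_model import LinearRegression

# rows = administrative units; columns = area of urban / cropland / other (km^2)
land_use = np.array([[12.0, 40.0, 100.0],
                     [30.0, 25.0,  60.0],
                     [ 5.0, 70.0, 200.0],
                     [45.0, 10.0,  30.0]])
census_pop = np.array([80_000, 150_000, 40_000, 220_000])

model = LinearRegression(fit_intercept=False).fit(land_use, census_pop)
print("density coefficients (people per km^2):", model.coef_)

# Apply to one 1 km grid cell containing 0.6 km^2 urban and 0.4 km^2 cropland:
cell = np.array([[0.6, 0.4, 0.0]])
print("estimated cell population:", model.predict(cell)[0])
```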

6.
7.
Visual FoxPro (VFP) is a very widely used database management system that supports procedural program design. Through VFP's object model, object-oriented features such as inheritance, encapsulation, polymorphism, and subclassing can be applied. VFP offers powerful data processing capabilities, a friendly graphical user interface, and a rich set of visual design tools; used skillfully, these capabilities can be exploited to best effect.

8.
The design of pharmacokinetic and pharmacodynamic experiments concerns a number of issues, among which are the number of observations and the times when they are taken. Often a model is used to describe these data and the pharmacokinetic-pharmacodynamic behavior of a drug. Knowledge of the data analysis model at the design stage is beneficial for collecting patient data for parameter estimation. A number of criteria for model-oriented experiments, which maximize the information content of the data, are available. In this paper we present a program, Popdes, to investigate the D-optimal design of individual and population multivariate response models, such as pharmacokinetic-pharmacodynamic, physiologically based pharmacokinetic, and parent drug and metabolite models. A pre-clinical and clinical pharmacokinetic-pharmacodynamic model describing the concentration-time profile and effect of an oncology compound in development is used for illustration.
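As a toy illustration of the D-optimality criterion such tools optimize (not Popdes itself), the sketch below scores candidate sampling-time sets by det(X'X) for a simple stand-in model:

```python
# D-optimal design: among candidate sampling times, pick the subset that
# maximizes the determinant of the (here, linear-model) information matrix.

import numpy as np
from itertools import combinations

def information_det(times):
    # Real PK/PD designs use model-specific sensitivities; a polynomial
    # model y = b0 + b1*t + b2*t^2 stands in for illustration.
    X = np.vander(np.asarray(times), N=3, increasing=True)
    return np.linalg.det(X.T @ X)

candidates = [0.5, 1, 2, 4, 8, 12, 24]
best = max(combinations(candidates, 4), key=information_det)
print("D-optimal 4-point design among candidates:", best)
```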

9.
This paper proposes a practical methodology for the problem of designing a metro configuration under two criteria: population coverage and construction cost. It is assumed that a set of corridors defining a rough a priori geometric configuration is provided by the planners. The proposed algorithm consists of fine-tuning the location of single alignments within each corridor. This is achieved by means of a bicriteria methodology that generates sets of non-dominated paths. These alignments are then combined to form a metro network by solving a bicriteria integer linear program. Extensive computational experiments confirm the efficiency of the proposed methodology.

10.
The results of a case study in which over 100,000 Pascal program executions were monitored for run-time errors are reported. A large number of run-time errors in a wide variety of categories were observed. The data reported provide insight into the use and misuse of the features of Pascal by a large population of programmers. Some implications of these statistics for compiler implementation and programming language design are discussed. The number and variety of errors detected suggest that run-time checking mechanisms are more important and useful than is generally recognized, judging by the incompleteness of such mechanisms in many compilers.

11.
IC testing based on a full-scan design methodology and ATPG is the most widely used test strategy today. However, rapidly growing test costs are severely challenging the applicability of scan-based testing. Both test data size and number of test cycles increase drastically as circuit size grows and feature size shrinks. For a full-scan circuit, test data volume and test cycle count are both proportional to the number of test patterns N and the longest scan chain length L. To reduce test data volume and test cycle count, we can reduce N, L, or both. Earlier proposals focused on reducing the number of test patterns N through pattern compaction. All these proposals assume a 1-to-1 scan configuration, in which the number of internal scan chains equals the number of external scan I/O ports or test channels (two ports per channel) from the ATE. Some have shown that ATPG for a circuit with multiple clocks using the multicapture clocking scheme, as opposed to one-hot clocking, generates fewer test patterns.
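A back-of-envelope model of the stated proportionality, with illustrative numbers and the simplifying assumption that each pattern needs L shift cycles plus one capture cycle:

```python
# Test data volume and test cycle count both scale with pattern count N
# and longest scan chain length L. Numbers are illustrative only.

N, L, chains = 10_000, 2_000, 8

bits_loaded = N * L * chains          # scan-in data volume
cycles = N * (L + 1)                  # L shift cycles + 1 capture per pattern
print(f"data volume = {bits_loaded / 8 / 1e6:.1f} MB, cycles = {cycles / 1e6:.2f} M")

# Splitting the flops into 4x more, 4x shorter chains cuts L (and thus the
# cycle count) roughly fourfold, provided enough scan channels exist.
```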

12.
A new microcomputer program for interactive comparison, analysis and calculation of in vitro solute transepithelial fluxes and of nutrient uptake and accumulation by intestinal tissues is described. The program is written in Microsoft GWBASIC for the widely distributed IBM PC. Flux, uptake and accumulation values are computed from initial sequential data files and stored in sequential or random-organized files, depending on the specific process. Computed unidirectional fluxes are saved together with their associated electrical variables. The program calculates univariate statistics for unidirectional and net fluxes, and allows comparisons between selected groups of experiments using Student's t-test. A homoscedasticity test is also provided. Active uptake or accumulation of solutes is calculated by subtracting the predicted diffusive value at a given concentration from the total uptake or accumulation, and from this the kinetic parameters are computed. The program was designed to be user-friendly and includes several functions that let the user manage and examine stored data. Example outputs illustrate applications of the program.
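The active-uptake computation described above can be sketched as follows (synthetic numbers; Kd, Jmax, and Km are the usual diffusive and Michaelis-Menten parameters, and scipy's curve_fit stands in for the program's kinetic fitting):

```python
# Subtract the predicted diffusive component (Kd * S) from total uptake,
# then fit Michaelis-Menten parameters to the remaining active uptake.

import numpy as np
from scipy.optimize import curve_fit

S = np.array([0.1, 0.5, 1.0, 2.0, 5.0, 10.0])          # substrate conc. (mM)
total = np.array([0.55, 1.9, 2.9, 4.1, 6.0, 8.4])      # total uptake
Kd = 0.5                                               # diffusive constant

active = total - Kd * S                                 # active component

mm = lambda s, Jmax, Km: Jmax * s / (Km + s)
(Jmax, Km), _ = curve_fit(mm, S, active, p0=(5.0, 1.0))
print(f"Jmax = {Jmax:.2f}, Km = {Km:.2f}")
```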

13.
Design of microwave components is an inherently multiobjective task. Often, the objectives are at least partially conflicting and the designer has to work out a suitable compromise. In practice, generating the best possible trade-off designs requires multiobjective optimization, which is a computationally demanding task. If the structure of interest is evaluated through full-wave electromagnetic (EM) analysis, the employment of widely used population-based metaheuristic algorithms may become prohibitive in computational terms. This is a common situation for miniaturized components, where considerable cross-coupling effects make traditional representations (e.g., network equivalents) grossly inaccurate. This article presents a framework for accelerated EM-driven multiobjective design of compact microwave devices. It adopts a recently reported nested kriging methodology to identify the parameter space region containing the Pareto front and to render a fast surrogate, subsequently used to find the first approximation of the Pareto set. The final trade-off designs are produced in a separate, surrogate-assisted refinement process. Our approach is demonstrated using a three-section impedance matching transformer designed for the best matching and the minimum footprint area. The Pareto set is generated at the cost of only a few hundred high-fidelity EM simulations of the transformer circuit, despite the large number of geometry parameters involved.
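A hedged sketch of the surrogate step: an off-the-shelf kriging (Gaussian-process) model trained on a handful of samples and then queried cheaply; sklearn's GP stands in for the paper's nested kriging machinery, and the data are synthetic.

```python
# Kriging surrogate: train on a few expensive evaluations (full-wave EM
# simulations in the paper), then predict cheaply during the search.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

X = np.random.default_rng(0).uniform(0, 1, size=(30, 5))   # geometry params
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2                      # stand-in response

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.3)).fit(X, y)
x_new = np.full((1, 5), 0.5)
mean, std = gp.predict(x_new, return_std=True)
print(f"surrogate prediction: {mean[0]:.3f} +/- {std[0]:.3f}")
```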

14.
Stratification of study subjects by one or more covariates is a commonly accepted method for dealing with confounding and effect modification in epidemiologic case-control studies. A flexible FORTRAN program is described which facilitates simultaneous stratification by several covariates and which produces summary odds ratio estimates and chi-square statistics by the Mantel-Haenszel method. It also facilitates detection of effect modification by each covariate considered individually. Straightforward means are provided for the user to modify input data before analysis or to exclude certain subjects from analysis, simulating such capabilities in larger statistical packages.
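The Mantel-Haenszel summary odds ratio itself is compact enough to sketch directly; each stratum below is an illustrative 2x2 table (a, b, c, d) = (exposed cases, exposed controls, unexposed cases, unexposed controls).

```python
# Mantel-Haenszel summary odds ratio over covariate strata:
# OR_MH = sum(a_i * d_i / n_i) / sum(b_i * c_i / n_i)

strata = [(10, 20, 5, 40), (8, 15, 6, 30)]   # two covariate strata

num = sum(a * d / (a + b + c + d) for a, b, c, d in strata)
den = sum(b * c / (a + b + c + d) for a, b, c, d in strata)
or_mh = num / den
print(f"Mantel-Haenszel OR = {or_mh:.2f}")
```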

15.
HEpiMA: software for the identification of heterogeneity in meta-analysis (cited by 2; self-citations: 0; external: 2)
Meta-analysis is a quantitative method available to epidemiologists, psychologists, social scientists and others who wish to produce a summary measure of the effect of exposure on disease, based on results from published studies, along with a summary measure of uncertainty. The magnitude of the effect varies from study to study because of differences in the features of these studies (design, population, control of confounding variables, etc.). From the various studies, an estimator is formed by pooling the results found in each study into one summary measure. This summary (or pooled) measure is meaningful only if the magnitude of heterogeneity between study effects is small and can be explained by sampling variation. In this paper, we present HEpiMA, a new comprehensive and user-friendly software program for epidemiologic meta-analysis. HEpiMA has features that are not available in other programs. The program carries out a complete study of heterogeneity of study effects, with 11 hypothesis test results. In addition to model-based methods, the program also implements bootstrap methodology. New, useful estimators of heterogeneity, Ri and CV(B), developed by the authors, are given in the output. In addition to these unique features, the major advantage of this software is the option for direct entry of adjusted relative risk estimates from individual studies, the most common form of presentation of results in the epidemiologic literature. This program may also be useful to meta-analysts of clinical trials in which the relative risk is the parameter of interest, as it also allows the entry of crude data in the form of 2x2 tables.
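Of the heterogeneity checks mentioned, the classical Cochran's Q test is easy to sketch; the study estimates below are invented and the code is not HEpiMA's:

```python
# Cochran's Q heterogeneity test on log relative risks.

import numpy as np
from scipy.stats import chi2

log_rr = np.log([1.4, 1.1, 2.0, 0.9, 1.6])    # adjusted RRs from 5 studies
se = np.array([0.20, 0.15, 0.30, 0.25, 0.18])

w = 1 / se**2
pooled = np.sum(w * log_rr) / np.sum(w)        # fixed-effect pooled estimate
Q = np.sum(w * (log_rr - pooled) ** 2)
p = chi2.sf(Q, df=len(log_rr) - 1)
print(f"Q = {Q:.2f}, p = {p:.3f}")             # small p suggests heterogeneity
```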

16.
A method for selecting surrogate models in crashworthiness optimization (cited by 2; self-citations: 2; external: 0)
Surrogate-model- or response-surface-based design optimization has been widely adopted as a common process in the automotive industry, as large-scale, high-fidelity models are often required. However, most surrogate models are built using a limited number of design points without considering data uncertainty. In addition, the selection of a surrogate model in the literature is often arbitrary. This paper presents a Bayesian metric to complement root mean square error for selecting the best surrogate model among several candidates in a library under data uncertainty. A strategy for automatically selecting the best surrogate model and determining a reasonable sample size is proposed for design optimization of large-scale complex problems. Lastly, a vehicle example with full-frontal and offset-frontal impacts is presented to demonstrate the proposed methodology.
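A sketch of the baseline criterion that the proposed Bayesian metric complements: ranking candidate surrogates from a small library by cross-validated RMSE on the same design points (models and data here are placeholders):

```python
# Rank a library of candidate surrogates by cross-validated RMSE.

import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor
from sklearn.svm import SVR

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, (60, 4))                 # design variables
y = X[:, 0] ** 2 + 0.5 * X[:, 1] + rng.normal(0, 0.05, 60)

library = {"linear": LinearRegression(),
           "random forest": RandomForestRegressor(random_state=0),
           "SVR": SVR()}
for name, model in library.items():
    rmse = -cross_val_score(model, X, y, cv=5,
                            scoring="neg_root_mean_squared_error").mean()
    print(f"{name:14s} RMSE = {rmse:.3f}")
```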

17.
XML documents are becoming popular for business process integration. To achieve interoperability between applications, XML documents must also conform to various commonly used data type definitions (DTDs). However, most business data are not maintained as XML documents. They are stored in various native formats, such as database tables or LDAP directories. Hence, middleware is needed to dynamically generate XML documents conforming to predefined DTDs from various data sources. As industrial consortia and large corporations have created various DTDs, it is both challenging and time-consuming to design middleware that conforms to so many different DTDs. This problem is particularly acute for small- and medium-sized enterprises, which lack the IT skills to quickly develop such middleware. In this paper, we present XLE, an XML Lightweight Extractor, as a practical approach to dynamically extracting DTD-conforming XML documents from heterogeneous data sources. XLE is based on a framework called DTD source annotation (DTDSA). It treats a DTD as the control structure of a program. The annotations become the program statements, such as functions and assignments. DTD-conforming XML documents are generated by parsing annotated DTDs. Basically, DTD annotations describe declaratively the mappings between target XML documents and the source data. The XLE engine implements a few basic annotations, providing a practical solution for many small- and medium-sized enterprises. However, XLE is designed to be versatile. It allows sophisticated users to plug in their own implementations to access new types of data or to achieve better performance. Heterogeneous data sources can be simply specified in the annotations. A GUI tool is provided to highlight the places where annotations are needed.
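A loose sketch of the annotation idea, with a DTD-like element skeleton acting as the control structure and per-element rules pulling values from a source record; the annotation syntax below is invented and does not reproduce XLE's:

```python
# Generate a DTD-conforming XML document by walking an annotated element
# skeleton; annotations (plain callables here) map source data to elements.

import xml.etree.ElementTree as ET

source_row = {"id": "C42", "name": "Acme", "city": "Oslo"}   # e.g., a DB row

skeleton = {"customer": {"id": lambda r: r["id"],
                         "name": lambda r: r["name"],
                         "address": {"city": lambda r: r["city"]}}}

def render(tag, spec, row):
    elem = ET.Element(tag)
    for child, rule in spec.items():
        if callable(rule):
            ET.SubElement(elem, child).text = rule(row)   # leaf annotation
        else:
            elem.append(render(child, rule, row))          # nested element
    return elem

print(ET.tostring(render("customer", skeleton["customer"], source_row)))
```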

18.
Software reliability is one of the most important software quality indicators. It is concerned with the probability that the software can execute without any unintended behavior in a given environment. In previous research we developed the Reliability Prediction System (RePS) methodology to predict the reliability of safety-critical software such as that used in the nuclear industry. A RePS methodology relates software engineering measures to software reliability using various models, and it was found that RePSs using Extended Finite State Machine (EFSM) models and fault data collected through various software engineering measures possess the most satisfying prediction capability. In this research the EFSM-based RePS methodology is improved and implemented in a tool called the Automated Reliability Prediction System (ARPS). The features of the ARPS tool are introduced with a simple case study. An experiment using human subjects was also conducted to evaluate the usability of the tool, and the results demonstrate that the ARPS tool can indeed help the analyst apply the EFSM-based RePS methodology with fewer errors and lower error criticality.

19.
The AD7745 is a high-precision, fast-responding, ultra-low-power capacitance-to-digital converter. Its communication bus, I2C, is a simple bidirectional two-wire synchronous serial bus that needs only one data line and one clock line to transfer data between connected devices, but the widely used 51-series microcontrollers have no such hardware interface. This paper studies the AD7745's internal measurement principle and register configuration in detail, designs the hardware connection circuit between the chip and a microcontroller, and uses two ordinary I/O lines of an STC89C52RC microcontroller to emulate the timing required by the I2C bus in software, realizing control of the AD7745 and readout of its data. The key routines are given, and debugging results demonstrate the effectiveness and stability of the system.
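Below is an illustrative bit-banged I2C START and byte write mirroring what the paper does on two ordinary I/O lines, rendered in Python for readability (the paper's code targets the STC89C52RC); set_sda/set_scl are hypothetical GPIO hooks, and 0x48 is the AD7745's 7-bit address per its datasheet, so verify both against your hardware.

```python
# Software-emulated (bit-banged) I2C on two GPIO lines.

import time

def set_sda(v): pass              # hypothetical hook: replace with a real
def set_scl(v): pass              # port-pin write on the target MCU
def delay(): time.sleep(5e-6)     # ~5 us half period => ~100 kHz clock

def i2c_start():
    set_sda(1); set_scl(1); delay()
    set_sda(0); delay()           # SDA falls while SCL is high: START
    set_scl(0); delay()

def i2c_write_byte(byte):
    for i in range(7, -1, -1):    # MSB first
        set_sda((byte >> i) & 1); delay()
        set_scl(1); delay(); set_scl(0); delay()
    set_sda(1); set_scl(1); delay(); set_scl(0)   # release SDA for ACK slot

i2c_start()
i2c_write_byte((0x48 << 1) | 0)   # address the AD7745 for a register write
```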

20.
This study investigates anthropometric and kinetic characteristics of Korean adults. Dimensions, volumes (by the immersion method) and centers of mass (by the reaction-board method) are measured directly. The anthropometric characteristics of eighteen body segments from a sample of 1199 male and 937 female subjects aged between 20 and 39 in Kim et al. (1992) are used to estimate segment lengths as fractions of body height. Thirty-one male subjects and 29 female subjects in their twenties and thirties served for anthropometric and kinetic measurements of body segments according to the Röhrer index. The obtained data are compared with cadaver data in Dempster (1955), Matsui (1958) and Clauser et al. (1969). Also, to observe anthropometric and kinetic trends of Korean adults, the results are compared with those in Jung (1993) and Lim (1994).

Relevance to industry: The obtained anthropometric and kinetic data can be applied to areas such as workspace design, statistical guidelines for product design, human movement analysis, human manikin development, and vehicle seat and furniture design.
