Similar literature
 20 similar documents found (search time: 31 ms)
1.
2.
Software product line engineering seeks to systematise reuse when developing families of similar software systems, so as to minimise development time, cost and defects. To realise variability at the code level, product line methods classically advocate the use of inheritance, components, frameworks, aspects or generative techniques. However, these may require unaffordable paradigm shifts for developers if the software was not conceived as a product line from the outset. Furthermore, these techniques can conflict with a company’s coding practices or external regulations. These concerns motivated the industry–university collaboration described in this paper, in which we developed a minimally intrusive coding technique based on tags. The approach is complemented with traceability from code to feature diagrams, which is exploited for automated configuration. It is supported by a toolchain and is now in use in the partner company for the development of flight-grade satellite communication software libraries.
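The tag-based variability idea can be pictured with a small sketch. This is not the paper's actual tag syntax or toolchain: the `//#if`/`//#endif` markers, the `configure` function, and the sample library lines are all invented for illustration of how a configurator could keep or drop tagged code regions per feature selection.

```python
# Minimal sketch of tag-based variability resolution (hypothetical tag syntax).
# Regions between "//#if <feature>" and "//#endif" are kept only when the
# feature is enabled in the chosen configuration.

def configure(source_lines, enabled_features):
    """Return the product-specific code for one feature configuration."""
    output, stack = [], []          # stack of booleans: is the current region active?
    for line in source_lines:
        stripped = line.strip()
        if stripped.startswith("//#if "):
            feature = stripped[len("//#if "):]
            stack.append(feature in enabled_features)
        elif stripped == "//#endif":
            stack.pop()
        elif all(stack):            # emit only lines in fully active regions
            output.append(line)
    return output

library = [
    "int send(frame f) {",
    "//#if crc",
    "  f.crc = compute_crc(f);",
    "//#endif",
    "  return transmit(f);",
    "}",
]
print(configure(library, {"crc"}))
print(configure(library, set()))
```

Because the tags live in comments, the untagged library still compiles unmodified, which is what makes the technique minimally intrusive.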

3.
Spectrum-based fault localization is amongst the most effective techniques for automatic fault localization. However, abstractions of program execution traces, one of the required inputs for this technique, require instrumentation of the software under test at statement-level granularity in order to compute a list of potentially faulty statements. This introduces considerable overhead into the fault localization process, which can even become prohibitive in, e.g., resource-constrained environments. To counter this problem, we propose a new approach, coined dynamic code coverage (DCC), aimed at reducing this instrumentation overhead. Using coarser instrumentation, the technique starts by analyzing coverage traces for large components of the system under test. It then progressively increases the instrumentation detail for faulty components, until statement-level detail is reached. To assess the validity of our proposed approach, an empirical evaluation was performed, injecting faults into six real-world software projects. The evaluation demonstrates that the dynamic code coverage approach reduces the execution overhead of spectrum-based fault localization and also presents a more concise potential-fault ranking to the user. We observed execution time reductions of 27% on average and diagnostic report size reductions of 77% on average.
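The statement-level ranking step that DCC ultimately refines down to can be sketched as follows. This shows only a standard spectrum-based ranking with the Ochiai similarity coefficient; DCC's iterative coarse-to-fine instrumentation loop is not reproduced, and the spectra and test outcomes are invented:

```python
# Sketch of spectrum-based fault localization with the Ochiai similarity
# coefficient; statements covered mostly by failing tests rank highest.
from math import sqrt

def ochiai_ranking(spectra, errors):
    """spectra[t][s] = 1 if test t covered statement s; errors[t] = test t failed."""
    n_stmts = len(spectra[0])
    scores = []
    for s in range(n_stmts):
        n_ef = sum(1 for t, row in enumerate(spectra) if row[s] and errors[t])
        n_nf = sum(1 for t, row in enumerate(spectra) if not row[s] and errors[t])
        n_ep = sum(1 for t, row in enumerate(spectra) if row[s] and not errors[t])
        denom = sqrt((n_ef + n_nf) * (n_ef + n_ep))
        scores.append(n_ef / denom if denom else 0.0)
    return sorted(range(n_stmts), key=lambda s: -scores[s]), scores

# 3 tests over 4 statements; statements 1 and 2 are covered by every failing test.
spectra = [[1, 1, 1, 0],
           [0, 1, 1, 1],
           [1, 0, 0, 1]]
errors = [True, True, False]
ranking, scores = ochiai_ranking(spectra, errors)
print(ranking[0])   # most suspicious statement
```

DCC applies exactly this kind of scoring first at component granularity, then re-instruments only suspicious components at finer granularity.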

4.
Model-driven code generation has been investigated in traditional and object-oriented design paradigms, and significant progress has been made. It offers many advantages, including the rapid development of high-quality code: compared with a purely manual approach, errors are reduced and consistency between the design and the code is retained. Here, a model-driven code generation approach based on graph transformations for aspect-oriented development is proposed. The approach has two main transformation activities. The first transforms a visual (graphical) model of the design into a formal, text-based notation that can be readily processed. The graphical model is created by the software designer and uses a UML profile for aspect-oriented software (FDAF) to represent aspects and their components. XML is the target notation for this step; the transformation uses the XML meta-model to ensure that the output complies with the language. The second activity transforms the XML model into AspectJ source code, using the AspectJ meta-model to ensure that the output complies with the language. The transformations from the extended UML model to XML and from XML to AspectJ code are fully automated. The transformation algorithms are based on graph transformations, and tool support has been developed. Key technical issues in the approach are discussed, including performance, the amount of code generated, correctness, and adaptability, together with a comparison of the proposal with existing alternative approaches. The approach has been validated on three example systems: a banking system, a classroom scheduling system, and an insurance system. The banking system example is presented in the paper.
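The second transformation activity (XML model to AspectJ text) can be illustrated with a heavily simplified sketch. The XML schema below is invented, not the FDAF profile, and the real approach validates output against the AspectJ meta-model rather than string templating:

```python
# Sketch of an XML-to-AspectJ-style code generation step over a hypothetical,
# much simplified aspect model.
import xml.etree.ElementTree as ET

MODEL = """
<aspect name="Logging">
  <pointcut name="dbCalls" expr="call(* Database.*(..))"/>
  <advice type="before" pointcut="dbCalls" body="System.out.println(&quot;db call&quot;);"/>
</aspect>
"""

def generate_aspectj(xml_text):
    root = ET.fromstring(xml_text)
    lines = [f"public aspect {root.get('name')} {{"]
    for pc in root.findall("pointcut"):
        lines.append(f"  pointcut {pc.get('name')}(): {pc.get('expr')};")
    for adv in root.findall("advice"):
        lines.append(f"  {adv.get('type')}(): {adv.get('pointcut')}() {{ {adv.get('body')} }}")
    lines.append("}")
    return "\n".join(lines)

print(generate_aspectj(MODEL))
```

The intermediate XML step decouples the graphical front end from the code generator, which is why each side only needs to conform to its own meta-model.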

5.
If the dynamics of an evolutionary differential equation system possess a low-dimensional, attracting, slow manifold, there are many advantages to using this manifold to perform computations for long-term dynamics and to locate features such as stationary points, limit cycles, or bifurcations. Approximating the slow manifold, however, may be computationally as challenging as the original problem. If the system is defined by a legacy simulation code or a microscopic simulator, it may be impossible to perform the manipulations needed to directly approximate the slow manifold. In this paper we demonstrate that, with knowledge only of a set of “slow” variables that can be used to parameterize the slow manifold, we can conveniently compute, using a legacy simulator, on a nearby manifold. Forward and reverse integration, as well as the location of fixed points, are illustrated for a discretization of the Chafee-Infante PDE for parameter values for which an inertial manifold is known to exist and can be used to validate the computational results.

6.
7.
In this paper, a mixed approach for probabilistic structural durability design of mechanical systems is proposed. In this approach, a deterministic design optimization that treats the structural crack initiation and crack propagation lives at critical points of the structural component as design constraints is performed first. After an optimal design is obtained, a reliability analysis is performed to ascertain whether the deterministic optimal design is reliable. If the probability of failure of the deterministic optimal design is found to be unacceptable, a reliability-based design approach that employs a set of interactive design steps, such as trade-off analysis and what-if studies, is used to obtain a near-optimal design that is reliable at an affordable computational cost. A 3-D tracked-vehicle roadarm is employed to demonstrate the feasibility of the proposed approach.
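The reliability-analysis step can be sketched with a crude Monte Carlo estimate of the probability that a random fatigue life falls below the required service life. The lognormal life model and all numbers here are invented for illustration; the paper's method uses structural crack-initiation and crack-propagation models, not this toy distribution:

```python
# Crude Monte Carlo check of a deterministic optimum: estimate the probability
# that the (random) fatigue life falls below the required service life.
import random, math

def failure_probability(mean_log_life, sd_log_life, required_life, n=50000, seed=0):
    rng = random.Random(seed)
    failures = sum(
        1 for _ in range(n)
        if math.exp(rng.gauss(mean_log_life, sd_log_life)) < required_life
    )
    return failures / n

pf = failure_probability(mean_log_life=math.log(2.0e5), sd_log_life=0.4,
                         required_life=1.0e5)
print(pf)
```

If the estimated probability of failure is too high, the design iterates via the interactive reliability-based steps the abstract describes.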

8.
An approach to code motion and hoisting, a program optimization technique, is discussed. The safety and profitability of optimization in general, and of hoisting in particular, are analyzed. By restricting the analysis to a spanning tree imposed on the program graph, a linear algorithm is developed that provides sufficient but not necessary conditions for hoisting.
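A toy illustration of the hoisting idea: a computation whose operands are not assigned anywhere in a loop body is loop-invariant and can be moved out. Real analyses work on the program flow graph and must also establish safety; the list-of-tuples IR, the `hoist` function, and the single-pass test below are purely illustrative:

```python
# Toy hoisting sketch: statements whose operands are never assigned in the
# loop body are loop-invariant and are moved out.  (Single pass only; chains
# of invariant statements and safety conditions are ignored.)

def hoist(loop_body):
    """loop_body: list of (dest, op, arg1, arg2) three-address statements."""
    assigned = {dest for dest, *_ in loop_body}
    hoisted, remaining = [], []
    for stmt in loop_body:
        _, _, a1, a2 = stmt
        if {a1, a2}.isdisjoint(assigned):   # loop-invariant computation
            hoisted.append(stmt)
        else:
            remaining.append(stmt)
    return hoisted, remaining

body = [
    ("t1", "*", "a", "b"),    # a, b never assigned in the loop -> invariant
    ("i",  "+", "i", "t1"),   # depends on i (assigned here) -> stays
]
hoisted, remaining = hoist(body)
print(hoisted)
```

The paper's contribution is getting such conditions in linear time by restricting attention to a spanning tree of the program graph, at the cost of the conditions being sufficient but not necessary.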

9.
A supervised discriminant mixed integer programming algorithm (DISMIP) is described which achieves either linear or non-linear separation without assuming any specific probability distribution. This system offers greater flexibility in dealing with problems of multi-spectral classification. If the training sets are disjoint, a strictly separating surface is generated that maximizes a “dead zone” between the sets. If the sets intersect, a surface is generated that minimizes a specified misclassification error. The system has been experimentally tested in three practical applications, and the results are compared with a supervised classification using the LARSIS classifier [1].

10.
Context: Software networks are directed graphs of static dependencies between source code entities (functions, classes, modules, etc.). These structures can be used to investigate the complexity and evolution of large-scale software systems and to compute metrics associated with software design. The extraction of software networks is also the first step in reverse engineering activities.

Objective: The aim of this paper is to present SNEIPL, a novel approach to the extraction of software networks that is based on a language-independent, enriched concrete syntax tree representation of the source code.

Method: The applicability of the approach is demonstrated by the extraction of software networks representing real-world, medium to large software systems written in different languages belonging to different programming paradigms. To investigate the completeness and correctness of the approach, class collaboration networks (CCNs) extracted from real-world Java software systems are compared to CCNs obtained by other tools: Dependency Finder, which extracts entity-level dependencies from Java bytecode, and Doxygen, which realizes a language-independent fuzzy parsing approach to dependency extraction. We also compared SNEIPL to the fact extractors present in language-independent reverse engineering tools.

Results: Our approach to dependency extraction is validated on six real-world, medium to large-scale software systems written in Java, Modula-2, and Delphi. The results of the comparative analysis involving ten Java software systems show that the networks formed by SNEIPL are highly similar to those formed by Dependency Finder and more precise than the comparable networks formed with the help of Doxygen. Regarding the comparison with language-independent reverse engineering tools, SNEIPL provides both language-independent extraction and representation of fact bases.

Conclusion: SNEIPL is a language-independent extractor of software networks and consequently enables language-independent network-based analysis of software systems, computation of software design metrics, and extraction of fact bases for reverse engineering activities.
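What "software network extraction" means can be sketched for a single language using Python's own AST (SNEIPL itself works from a language-independent enriched concrete syntax tree, which this sketch does not reproduce). Nodes are functions; an edge f → g means f's body contains a call to g:

```python
# Sketch of software-network extraction for Python source: build a directed
# call graph among the functions defined in a module.
import ast

SOURCE = """
def parse(text): return tokenize(text)
def tokenize(text): return text.split()
def main(): print(parse("a b"))
"""

def call_graph(source):
    tree = ast.parse(source)
    defined = {n.name for n in ast.walk(tree) if isinstance(n, ast.FunctionDef)}
    edges = set()
    for fn in ast.walk(tree):
        if isinstance(fn, ast.FunctionDef):
            for node in ast.walk(fn):
                if (isinstance(node, ast.Call)
                        and isinstance(node.func, ast.Name)
                        and node.func.id in defined):
                    edges.add((fn.name, node.func.id))
    return edges

print(sorted(call_graph(SOURCE)))
```

Once such a graph exists, network metrics and fact bases for reverse engineering can be computed on it independently of the source language.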

11.
Linear mixed models with skew-elliptical distributions: A Bayesian approach
Normality of random effects and error terms is a routine assumption for linear mixed models. However, such an assumption may be unrealistic, obscuring important features of within-unit and among-unit variation. We present a simple and robust Bayesian parametric approach that relaxes this assumption by using a multivariate skew-elliptical distribution, which includes the skew-t, skew-normal, Student-t, and normal distributions as special cases and provides flexibility in capturing a broad range of non-normal and asymmetric behavior. An appropriate posterior simulation scheme is developed, and the methods are illustrated with an application to a longitudinal data example.
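One member of this family, the skew-normal, has a simple stochastic representation that shows where the extra flexibility comes from: Z = δ|U₀| + √(1-δ²)·U₁ with U₀, U₁ iid N(0,1) is skew-normal with shape α = δ/√(1-δ²), and δ = 0 recovers the symmetric normal. A minimal simulation sketch (not the paper's posterior scheme):

```python
# Hidden-truncation construction of the skew-normal: delta controls skewness,
# delta = 0 gives the ordinary standard normal.
import random, math

def skew_normal_sample(delta, n, seed=0):
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        u0, u1 = rng.gauss(0, 1), rng.gauss(0, 1)
        out.append(delta * abs(u0) + math.sqrt(1 - delta**2) * u1)
    return out

zs = skew_normal_sample(0.9, 20000)
mean = sum(zs) / len(zs)
print(round(mean, 2))   # theory: E[Z] = delta * sqrt(2/pi) ~ 0.72 for delta = 0.9
```

Such representations are exactly what makes Gibbs-style posterior simulation for these models tractable.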

12.
Code smells are a popular mechanism for finding structural design problems in software systems. Consequently, several tools have emerged to support the detection of code smells. However, the number of smells returned by current tools usually exceeds the number of problems the developer can deal with, particularly when the effort available for performing refactorings is limited. Moreover, not all code smells are equally relevant to the goals of the system or its health. This article presents a semi-automated approach that helps developers focus on the most critical problems of the system. We have developed a tool that suggests a ranking of code smells based on a combination of three criteria: past component modifications, important modifiability scenarios for the system, and the relevance of the kind of smell. These criteria are complementary and enable our approach to assess the smells from different perspectives. Our approach has been evaluated in two case studies, and the results show that the suggested code smells are useful to developers.
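The ranking step can be sketched as a weighted combination of the three criteria named in the abstract. The weights, the 0–1 scores, and the example smells below are invented illustrative values, not the tool's actual scoring model:

```python
# Sketch of ranking code smells by a weighted combination of three criteria.
WEIGHTS = {"past_modifications": 0.4, "scenario_relevance": 0.4, "kind_relevance": 0.2}

def rank_smells(smells):
    """smells: dict name -> dict of criterion scores in [0, 1]."""
    def score(item):
        _, criteria = item
        return sum(WEIGHTS[c] * criteria[c] for c in WEIGHTS)
    return [name for name, _ in sorted(smells.items(), key=score, reverse=True)]

smells = {
    "GodClass:Scheduler":   {"past_modifications": 0.9, "scenario_relevance": 0.8, "kind_relevance": 0.9},
    "DataClass:Config":     {"past_modifications": 0.1, "scenario_relevance": 0.2, "kind_relevance": 0.3},
    "FeatureEnvy:Reporter": {"past_modifications": 0.6, "scenario_relevance": 0.7, "kind_relevance": 0.4},
}
print(rank_smells(smells))
```

A frequently modified god class in a modifiability-critical component ends up at the top, which is the prioritization behavior the approach is after.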

13.
The optimal operation of pumps in a large water supply system under time-of-use electricity rates is formulated as a mixed integer programming (MIP) problem. The problem is solved using an iterative linear programming (LP) scheme. The scheme is applied to a real-world problem, the City of Inglewood Water Supply System. Computational results are presented, and termination criteria for the solution scheme are discussed.
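The core economics can be illustrated with a greedy relaxation of the scheduling idea: meet a required daily pumped volume by running the pump in the cheapest tariff hours first. The paper's MIP also enforces reservoir levels and hydraulic constraints, which this sketch omits, and all numbers are invented:

```python
# Greedy illustration of pump scheduling under time-of-use electricity rates:
# pick the cheapest hours until the required volume is met.

def cheapest_schedule(tariff, pump_rate, required_volume):
    """tariff: $/kWh per hour; returns the set of hours to run the pump."""
    hours = sorted(range(len(tariff)), key=lambda h: tariff[h])
    chosen, volume = set(), 0.0
    for h in hours:
        if volume >= required_volume:
            break
        chosen.add(h)
        volume += pump_rate
    return chosen

# 6-hour toy horizon: off-peak hours 0-2 are cheap, peak hours 3-5 expensive.
tariff = [0.05, 0.05, 0.06, 0.20, 0.25, 0.22]
schedule = cheapest_schedule(tariff, pump_rate=100.0, required_volume=250.0)
print(sorted(schedule))
```

Once reservoir-level and pump-switching constraints are added, the greedy choice is no longer valid, which is why the full problem needs an MIP/LP formulation.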

14.
An approach to perform a system-wide analysis of hospital resources and capacity has been developed. Embedded within an intelligent system, it would give planners and managers the capability to strategically improve the efficiency of their hospitals today and a means to create more efficient hospitals in the future. In theory, this approach can help hospitals with a variety of capacity planning and resource allocation activities. On a day-to-day basis it can be used to perform a variety of important capacity-querying activities. In addition, it can be used to predict the future performance of a hospital and the effect of structural and parametric changes within the hospital. The approach consists of a mixed integer linear programming (MILP) model and a number of advanced extensions. The MILP models can determine the maximum number of patients of each type that can be treated within a given period of time, or the time required to process a given cohort of patients. A case study of a large public hospital has been performed to validate our approach. Extensive numerical investigations successfully demonstrate the applicability of the approach to real-sized health care applications and the great potential for further research and development on this topic.

15.
Sequencing in mixed model assembly lines: A genetic algorithm approach
Mixed model assembly lines are becoming increasingly popular in a wide range of industries. We consider the sequencing problem in mixed model assembly lines, which is critical for efficient utilization of the lines. We extend the standard formulation of the problem to allow a hybrid assembly line, in which closed and open workstations are intermixed, and sequence-dependent setup times. A new approach using an artificial-intelligence search technique, the genetic algorithm, is proposed. A genetic representation suitable for the problem is investigated, and genetic control parameters that yield good results are found empirically. A new genetic operator, Immediate Successor Relation Crossover (ISRX), is introduced, and several existing operators are modified. An extensive experiment is carried out to determine a proper choice of genetic operators. The performance of the genetic algorithm is compared with those of a heuristic algorithm and a branch-and-bound method. The results show that our algorithm greatly reduces the computation time and that its solutions are very close to optimal. We found that the ISRX operator plays a significant role in improving performance.
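A minimal permutation-GA sketch for sequencing with sequence-dependent setup times follows. It does not reproduce ISRX or the paper's hybrid-line model: a standard order crossover and swap mutation stand in for the paper's operators, and the setup-time matrix is invented:

```python
# Tiny genetic algorithm for model sequencing: minimize total sequence-
# dependent setup time over permutations of the models.
import random

SETUP = {  # setup time when model b follows model a (invented numbers)
    ("A", "B"): 2, ("A", "C"): 9, ("B", "A"): 3,
    ("B", "C"): 1, ("C", "A"): 2, ("C", "B"): 8,
}

def cost(seq):
    return sum(SETUP[(a, b)] for a, b in zip(seq, seq[1:]))

def order_crossover(p1, p2, rng):
    i, j = sorted(rng.sample(range(len(p1)), 2))
    child = [None] * len(p1)
    child[i:j + 1] = p1[i:j + 1]
    rest = [g for g in p2 if g not in child]   # fill gaps in p2's order
    for k in range(len(child)):
        if child[k] is None:
            child[k] = rest.pop(0)
    return child

def ga(models, generations=50, pop_size=20, seed=1):
    rng = random.Random(seed)
    pop = [rng.sample(models, len(models)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=cost)
        parents = pop[:pop_size // 2]          # elitist selection
        children = []
        while len(children) < pop_size - len(parents):
            c = order_crossover(rng.choice(parents), rng.choice(parents), rng)
            if rng.random() < 0.2:             # swap mutation
                a, b = rng.sample(range(len(c)), 2)
                c[a], c[b] = c[b], c[a]
            children.append(c)
        pop = parents + children
    return min(pop, key=cost)

best = ga(["A", "B", "C"])
print(best, cost(best))
```

On this three-model toy instance the optimum (total setup 3) is easy to enumerate, which makes the GA's answer checkable; the paper's contribution is making this competitive with branch-and-bound on realistic instances.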

16.
Over the last decade many techniques and tools for software clone detection have been proposed. In this paper, we provide a qualitative comparison and evaluation of the current state of the art in clone detection techniques and tools, and organize the large amount of information into a coherent conceptual framework. We begin with background concepts, a generic clone detection process, and an overall taxonomy of current techniques and tools. We then classify, compare, and evaluate the techniques and tools along two different dimensions. First, we classify and compare approaches based on a number of facets, each of which has a set of (possibly overlapping) attributes. Second, we qualitatively evaluate the classified techniques and tools with respect to a taxonomy of editing scenarios designed to model the creation of Type-1, Type-2, Type-3 and Type-4 clones. Finally, we provide examples of how one might use the results of this study to choose the most appropriate clone detection tool or technique in the context of a particular set of goals and constraints. The primary contributions of this paper are: (1) a schema for classifying clone detection techniques and tools, and a classification of current clone detectors based on this schema; and (2) a taxonomy of editing scenarios that produce different clone types, and a qualitative evaluation of current clone detectors based on this taxonomy.
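The clone types mentioned above can be made concrete with a sketch of a token-based detector: after normalizing identifiers and literals to placeholders, identical fragments (Type-1) and consistently renamed fragments (Type-2) hash to the same key. The tokenizer, keyword list, and fragments below are invented simplifications; real detectors handle Type-3/Type-4 with far more machinery:

```python
# Sketch of a token-based Type-1/Type-2 clone detector: normalize fragments,
# then report fragments whose normalized forms coincide.
import re

def normalize(fragment):
    tokens = re.findall(r"[A-Za-z_]\w*|\d+|\S", fragment)
    keywords = {"if", "return", "for", "while", "int", "def"}
    out = []
    for t in tokens:
        if t.isdigit():
            out.append("LIT")      # literal abstracted away
        elif re.match(r"[A-Za-z_]", t) and t not in keywords:
            out.append("ID")       # identifier renamed -> Type-2 clones still match
        else:
            out.append(t)
    return " ".join(out)

def clone_pairs(fragments):
    seen, pairs = {}, []
    for name, code in fragments.items():
        key = normalize(code)
        if key in seen:
            pairs.append((seen[key], name))
        else:
            seen[key] = name
    return pairs

fragments = {
    "f1": "if x > 10 : return x",
    "f2": "if y > 99 : return y",   # Type-2 clone of f1 (renamed, new literal)
    "f3": "while x > 10 : x = x - 1",
}
print(clone_pairs(fragments))
```

The survey's editing-scenario taxonomy is essentially a systematic catalogue of which such edits (formatting, renaming, statement changes, semantic rewrites) each detector family can still see through.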

17.
The accumulate code, one of the component codes of irregular repeat-accumulate (RA) codes, has an explicit EXIT function only on the erasure channel; on general channels its EXIT curve is obtained by simulation. Obtaining the EXIT curve by simulation not only increases the design complexity of irregular RA codes but also prevents the design process from being carried out in real time. We first prove that, for the accumulate code, the serial sum-product algorithm is equivalent to the BCJR algorithm. On this basis, we prove that during serial sum-product decoding of the accumulate code, the mutual information associated with the intermediate decoding variables is stationary. Using this stationarity, we derive an implicit expression for the EXIT function of the accumulate code and thereby obtain an efficient numerical algorithm for computing its EXIT curve. The EXIT curves obtained with this method agree with simulation results; the method significantly reduces the design complexity of irregular RA codes and makes their design feasible in real time.  相似文献
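A standard numerical building block of EXIT-chart analysis like the above is the J-function, which maps the standard deviation σ of a consistent Gaussian LLR (mean σ²/2, variance σ²) to its mutual information with the transmitted bit: J(σ) = 1 − E[log₂(1 + e^{−L})]. A minimal sketch using plain trapezoidal integration (illustrative accuracy only, unrelated to the paper's implicit EXIT expression):

```python
# Numerical J-function for EXIT analysis: mutual information of a consistent
# Gaussian LLR with standard deviation sigma, via trapezoidal integration.
import math

def J(sigma, n=4000, width=10.0):
    if sigma <= 0:
        return 0.0
    mean, var = sigma**2 / 2.0, sigma**2
    lo, hi = mean - width * sigma, mean + width * sigma
    h = (hi - lo) / n
    total = 0.0
    for k in range(n + 1):
        x = lo + k * h
        pdf = math.exp(-(x - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)
        f = pdf * math.log2(1.0 + math.exp(-x))
        total += f * (0.5 if k in (0, n) else 1.0)
    return 1.0 - total * h

print(round(J(2.0), 3))
```

J is monotonically increasing from 0 toward 1, which is what lets EXIT curves be traded back and forth between the LLR-variance and mutual-information domains.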

18.
This research investigates the problem of data compression using an unsupervised estimation algorithm. It extends previous work on a hybrid source coder that combines an orthogonal transformation with differential pulse code modulation (DPCM). The data compression is achieved in the DPCM loop, and it is the quantizer of this scheme that is approached via an unsupervised learning procedure. The distribution defining the quantizer is represented as a set of separable Laplacian mixture densities for two-dimensional images. The condition of identifiability is shown for the Laplacian case, and decision-directed estimates of both the active distribution parameters and the mixing parameters are discussed within a Bayesian structure. The decision-directed estimators, although not optimal, provide a realizable structure for estimating the parameters that define a distribution which has become active. These parameters are then used to scale the optimal (in the mean square error sense) Laplacian quantizer. The decision criterion is modified to prevent convergence to a single distribution, which in effect is the default condition for a variance estimator. The investigation was applied to a test image, and the resulting data demonstrate improvement over other techniques using fixed bit assignments and ideal channel conditions.
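The DPCM loop with an adaptively scaled quantizer can be sketched as follows. The Laplacian mixture machinery of the paper is reduced here to a single running scale estimate feeding a uniform quantizer; the previous-sample predictor, step-sizing rule, and signal are all invented for illustration:

```python
# Sketch of a DPCM loop whose quantizer step is scaled by a running,
# decision-directed-style estimate of the prediction-error spread.

def dpcm_encode_decode(samples, levels=8, alpha=0.9):
    recon, prev, scale = [], 0.0, 1.0
    for x in samples:
        err = x - prev                                # predictor: previous sample
        step = max(2 * 3 * scale / levels, 1e-9)      # quantizer sized to +-3*scale
        q = round(err / step) * step                  # uniform quantizer stand-in
        scale = alpha * scale + (1 - alpha) * abs(q)  # decision-directed update
        prev = prev + q                               # decoder tracks the same state
        recon.append(prev)
    return recon

signal = [float(v) for v in [0, 2, 4, 4, 5, 3, 1, 0]]
recon = dpcm_encode_decode(signal)
mse = sum((a - b) ** 2 for a, b in zip(signal, recon)) / len(signal)
print(round(mse, 3))
```

Because the update uses the quantized value q (which the decoder also sees) rather than the true error, encoder and decoder stay synchronized, which is the essence of the decision-directed arrangement.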

19.
Scandura, J.M. Computer, 1994, 27(4), 55-61
Describes a code conversion tool that helps programmers visualize and understand system design. The author first reviews current software reengineering tools and then describes a new cognitive approach to system (re)engineering based on code comprehension tools that provide a visual representation of code containing less cognitive noise, which better enables programmers to understand system design. The approach integrates code comprehension tools with current reengineering methodologies to create an integrated reengineering workbench for converting legacy code into newer languages such as Ada or C/C++.

20.
The prevalence of moving object data (MOD) brings new opportunities for behavior-related research. Periodic behavior is one of the most important behaviors of moving objects. However, existing methods for detecting periodicities assume that a moving object either has no periodic behavior at all or has just a single periodic behavior in one place. They are therefore incapable of dealing with many real-world situations in which a moving object may have multiple periodic behaviors mixed together. To address this problem, this paper proposes a probabilistic periodicity detection method called MPDA. MPDA first identifies high-density regions by the kernel density method, then generates revisit time sequences based on the dense regions, and finally adopts a filter-refine paradigm to detect mixed periodicities. At the filter stage, candidate periods are identified by comparing the observed and reference distributions of revisit time intervals using the chi-square test; at the refine stage, a periodic degree measure is defined to examine the significance of candidate periods and identify the accurate periods present in the MOD. Synthetic datasets with various characteristics and two real-world tracking datasets validate the effectiveness of MPDA under various scenarios. MPDA has the potential to play an important role in analyzing complicated behaviors of moving objects.
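The filter stage can be sketched with a chi-square test of phase concentration: folding visit times modulo a candidate period concentrates a true period's visits into few phase bins and spreads a false one roughly uniformly. MPDA's kernel density estimation, reference distribution, and refine stage are omitted; the visit data and candidate periods below are invented:

```python
# Sketch of a chi-square filter for candidate periods: fold visit times
# modulo each candidate and score deviation from a uniform phase histogram.

def chi_square_vs_uniform(values, period, bins=8):
    counts = [0] * bins
    for v in values:
        counts[int((v % period) / period * bins)] += 1
    expected = len(values) / bins
    return sum((c - expected) ** 2 / expected for c in counts)

def best_period(visit_times, candidates):
    return max(candidates, key=lambda p: chi_square_vs_uniform(visit_times, p))

# Object revisits a region every 24 time units, with small phase jitter.
visits = [24 * k + jitter for k, jitter in zip(range(30), [0, 1, -1] * 10)]
print(best_period(visits, candidates=[7, 24, 30]))
```

Running the same filter per dense region is what lets the method pull apart multiple periodic behaviors mixed in one trajectory.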


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号