Similar Literature
20 similar articles found
1.
The accuracy of performance-prediction models is crucial for widespread adoption of performance prediction in industry. One of the essential accuracy-influencing aspects of software systems is the dependence of system behaviour on a configuration, context or history related state of the system, typically reflected in a (persistent) system attribute. Even in the domain of component-based software engineering, the presence of state-reflecting attributes (so-called internal states) is a natural ingredient of the systems, implying the existence of stateful services, stateful components and stateful systems as such. Currently, there is no consensus on the definition of, or method to include, state-related information in component-based prediction models. Besides the task of identifying and localising different types of stateful information across a component-based software architecture, the issue is to balance the expressiveness and complexity of prediction models via an effective abstraction of state modelling. In this paper, we identify and classify stateful information in component-based software systems, study the performance impact of the individual state categories, and discuss the costs of modelling them in terms of increased model size. The observations are formulated into a set of heuristics guiding software engineers in state modelling. Finally, the practical effect of state modelling on software performance is evaluated on a real-world case study, the SPECjms2007 Benchmark. The observed deviation between measurements and predictions was significantly decreased by more precise models of stateful dependencies.

2.
The expansion of wireless communication and mobile hand-held devices makes it possible to deploy a broad range of applications on mobile terminals such as PDAs and mobile phones. The constant context changes of mobile users oblige them to carry out many deployment tasks for the same application in order to obtain a configuration that satisfies the context requirements. The difficulty and frequency of these deployment tasks led us to study deployment in a mobile environment and to look for a way to automate the adaptation of deployment to the context. This paper studies the sensitivity of deployment to the context in order to identify the variable deployment parameters and to analyze the impact of deployment adaptation on the production life cycle of applications. The contribution of this paper is an innovative middleware entity called Context-Aware Deployment of COMPonents (CADeComp), which can be plugged into existing middleware deployment services. CADeComp defines a flexible data model that facilitates the tasks of component producers and application assemblers by allowing them to specify the meta-information required to adapt the deployment to the context. The advantage of CADeComp is that it is based on reliable adaptive mechanisms that are defined by a platform-independent model according to the MDA approach. We propose a mapping of the CADeComp model to CCM. CADeComp was implemented and evaluated on this platform.

3.
In this paper, we present an original approach for enabling online reconfiguration of component-based applications. This research fits into our component composition methodology PacoSuite, which makes use of explicit connectors between components, called composition patterns. Both components and composition patterns are documented using a special kind of Message Sequence Chart (MSC). We propose an algorithm to check whether a new component can fulfill the role of an old component in a given composition pattern, without the need to revalidate the entire composition all over again. To enable online reconfiguration, we extend the documentation of a component with a new primitive that specifies when a component reaches a safe state. This approach makes it possible to swap a component at run-time while maintaining a consistent application.
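The safe-state primitive described above suggests a simple runtime protocol: block the swap until the outgoing component reports quiescence, then rebind its role atomically. A minimal Python sketch of that idea (the names Component, is_safe and swap are hypothetical illustrations, not PacoSuite's actual API):

```python
import threading
import time

class Component:
    """A component that can report when it has reached a safe state."""
    def __init__(self, name):
        self.name = name
        self._active_calls = 0
        self._lock = threading.Lock()

    def enter(self):   # called when a service request starts
        with self._lock:
            self._active_calls += 1

    def leave(self):   # called when a service request completes
        with self._lock:
            self._active_calls -= 1

    def is_safe(self):
        # Safe state in this sketch: no calls currently in progress.
        with self._lock:
            return self._active_calls == 0

def swap(composition, role, new_component, poll=0.01, timeout=5.0):
    """Replace the component bound to `role` once it signals a safe state."""
    old = composition[role]
    deadline = time.time() + timeout
    while not old.is_safe():
        if time.time() > deadline:
            raise TimeoutError(f"{old.name} never reached a safe state")
        time.sleep(poll)
    composition[role] = new_component  # atomic rebinding in this toy model
```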

4.
Performance Evaluation, 2006, 63(4-5):265-277
Performance prediction for parallel applications running in heterogeneous clusters is difficult to accomplish due to the unpredictable resource contention patterns that can be found in such environments. Typically, components of a parallel application will contend for the use of resources among themselves and with entities external to the application, such as other processes running in the computers of the cluster. The performance modeling approach should be able to represent these sources of contention and to produce an estimate of the execution time, preferably in polynomial time. This paper presents a polynomial time static performance prediction approach in which the prediction takes the form of an interval of values instead of a single value. The extra information given by an interval of values represents the variability of the underlying environment more accurately, as indicated by the practical examples presented.
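The interval idea can be illustrated with plain interval arithmetic: each task contributes a [best, worst] execution-time pair, sequential composition adds the bounds, and parallel composition takes the elementwise maximum. A minimal sketch, with invented task times and a simplified composition model rather than the paper's actual one:

```python
def seq(a, b):
    """Sequential composition: lower and upper bounds add."""
    return (a[0] + b[0], a[1] + b[1])

def par(a, b):
    """Parallel composition: finish when the slower branch finishes."""
    return (max(a[0], b[0]), max(a[1], b[1]))

# Task times in seconds as (best case, worst case under contention).
compute = (2.0, 3.5)   # may be slowed by external processes on a shared node
transfer = (0.5, 1.2)  # may be slowed by competing network traffic

# Two compute phases in parallel, followed by a data transfer.
estimate = seq(par(compute, compute), transfer)
print(estimate)  # (2.5, 4.7): the predicted execution-time interval
```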

5.
This paper presents H, a minimalistic specification language for designing heterogeneous software applications, particularly in the realms of robotics and industry, which takes advantage of a Component-Based Software Engineering (CBSE) approach. H copes with some of the most outstanding characteristics of these systems, such as diversity at different levels (hardware platforms, programming languages, programmer skills), network distribution, real time and fault tolerance. The H specification covers the life-cycle of any heterogeneous application. Its development system offers the designer and/or builder a set of tools for specifying modules, generating code semiautomatically, debugging, maintenance, and real-time analysis of the system.

6.
Component-based software is becoming an increasingly popular technology as a means for creating complex software systems by assembling off-the-shelf building blocks. However, many of the component-based methodologies that use large components fail to address issues of size, real-time performance, power, and cost, as well as problems associated with the configuration process itself. These issues are critical for using components in embedded systems.

7.
Reliability is a key driver of safety-critical systems such as health-care systems and traffic controllers. It is also one of the most important quality attributes of the systems embedded into our surroundings, e.g. sensor networks that produce information for business processes. Therefore, the design decisions that have a great impact on the reliability of a software system, i.e. architecture and components, need to be thoroughly evaluated. This paper addresses software reliability evaluation during the design and implementation phases; it provides a coherent approach by combining both predicted and measured reliability values with heuristic estimates in order to facilitate a smooth reliability evaluation process. The approach contributes by integrating the component-level reliability evaluation activities (i.e. the heuristic reliability estimation, model-based reliability prediction and model-based reliability measuring of components) and the system-level reliability prediction activity to support the incremental and iterative development of reliable component-based software systems. The use of the developed reliability evaluation approach with the supporting tool chain is illustrated by a case study. The paper concludes with a summary of lessons learnt from the case studies.

8.
Model-based performance evaluation methods for software architectures can help architects to assess design alternatives and save costs for late life-cycle performance fixes. A recent trend is component-based performance modelling, which aims at creating reusable performance models; a number of such methods have been proposed during the last decade. Their accuracy and the needed effort for modelling are heavily influenced by human factors, which are so far hardly understood empirically. Do component-based methods allow performance predictions of comparable accuracy while saving effort in a reuse scenario? We examined three monolithic methods (SPE, umlPSI, Capacity Planning (CP)) and one component-based performance evaluation method (PCM) with regard to their accuracy and effort from the viewpoint of method users. We conducted a series of three experiments (with different levels of control) involving 47 computer science students. In the first experiment, we compared the applicability of the monolithic methods in order to choose one of them for comparison. In the second experiment, we compared the accuracy and effort of this monolithic method and the component-based method for the model creation case. In the third, we studied the effort reduction from reusing component-based models. Data were collected based on the resulting artefacts, questionnaires and screen recording. They were analysed using hypothesis testing, linear models, and analysis of variance. For the monolithic methods, we found that using SPE and CP resulted in accurate predictions, while umlPSI produced over-estimates. Comparing the component-based method PCM with SPE, we found that creating reusable models using PCM takes more (but not drastically more) time than using SPE and that participants can create accurate models with both techniques. Finally, we found that reusing PCM models can save time, because the effort to reuse can be explained by a model that is independent of the inner complexity of a component. The tasks performed in our experiments reflect only a subset of the actual activities involved when applying model-based performance evaluation methods in a software development process. Our results indicate that sufficient prediction accuracy can be achieved with both monolithic and component-based methods, and that the higher effort for component-based performance modelling will indeed pay off when the component models incorporate and hide a sufficient amount of complexity.

9.
Predicting distributed application performance is a constant challenge to researchers, with an increased difficulty when heterogeneous systems are involved. Research conducted so far is limited by application type, programming language, or targeted system. The employed models become too complex and prediction cost increases significantly. We propose dPerf, a new performance prediction tool. In dPerf, we extended existing methods from the frameworks Rose and SimGrid. New methods have also been proposed and implemented such that dPerf would perform (i) static code analysis and (ii) trace-based simulation. Based on these two phases, dPerf predicts the performance of C, C++ and Fortran applications communicating using MPI or P2PSAP. Neither of the frameworks used was developed explicitly for performance prediction, making dPerf a novel tool. dPerf's accuracy is validated with a sequential Laplace code and a parallel NAS benchmark; it yields accurate results at a low prediction cost.

10.
Event-based communication is used in different domains including telecommunications, transportation, and business information systems to build scalable distributed systems. Such systems typically have stringent requirements for performance and scalability as they provide business and mission critical services. While the use of event-based communication enables loosely-coupled interactions between components and leads to improved system scalability, it makes it much harder for developers to estimate the system’s behavior and performance under load due to the decoupling of components and control flow. In this paper, we present our approach enabling the modeling and performance prediction of event-based systems at the architecture level. Applying a model-to-model transformation, our approach integrates platform-specific performance influences of the underlying middleware while enabling the use of different existing analytical and simulation-based prediction techniques. In summary, the contributions of this paper are: (1) the development of a meta-model for event-based communication at the architecture level, (2) a platform-aware model-to-model transformation, and (3) a detailed evaluation of the applicability of our approach based on two representative real-world case studies. The results demonstrate the effectiveness, practicability and accuracy of the proposed modeling and prediction approach.

11.
Systems and software architects require quantitative dependability evaluations, which allow them to compare the effect of their design decisions on dependability properties. For security, however, quantitative evaluations have proven difficult, especially for component-based systems. In this paper, we present a risk-based approach that creates modular attack trees for each component in the system. These modular attack trees are specified as parametric constraints, which allow quantifying the probability of security breaches that occur due to internal component vulnerabilities as well as vulnerabilities in the component’s deployment environment. In the second case, attack probabilities are passed between system components as appropriate to model attacks that exploit vulnerabilities in multiple system components. The probability of a successful attack is determined with respect to a set of attack profiles that are chosen to represent potential attackers and corresponding environmental conditions. Based on these attack probabilities and the structure of the modular attack trees, risk measures can be estimated for the complete system and compared with the tolerable risk demanded by stakeholders. The practicability of this approach is demonstrated with an example that evaluates the confidentiality of a distributed document management system.
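Propagating probabilities through such a tree follows the usual AND/OR composition: an AND node succeeds only if every child attack succeeds, an OR node if at least one does (assuming independent attack steps). A small Python sketch of this evaluation; the tuple encoding and the probabilities are invented for illustration:

```python
def attack_prob(node):
    """Evaluate the success probability of a modular attack tree (sketch).

    Leaves are ('leaf', p); inner nodes are ('and' | 'or', [children]).
    Child attacks are assumed independent.
    """
    kind, payload = node
    if kind == 'leaf':
        return payload
    probs = [attack_prob(child) for child in payload]
    if kind == 'and':                  # all steps must succeed
        p = 1.0
        for q in probs:
            p *= q
        return p
    p_fail = 1.0                       # 'or': at least one step succeeds
    for q in probs:
        p_fail *= (1.0 - q)
    return 1.0 - p_fail

# Breach via an internal vulnerability, OR via stolen credentials AND a
# vulnerable deployment environment (probabilities are made up).
tree = ('or', [('leaf', 0.05),
               ('and', [('leaf', 0.3), ('leaf', 0.2)])])
print(attack_prob(tree))  # 0.107
```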

12.
The Journal of Supercomputing - We present PPT-Multicore, an analytical model embedded in the Performance Prediction Toolkit (PPT) to predict parallel applications’ performance running on a...

13.
Increasing sustainability requirements make evaluating different design options for identifying energy-efficient designs ever more important. These requirements demand simulation models that are not only accurate but also fast. Machine Learning (ML) enables effective mimicry of Building Performance Simulation (BPS) while generating results much faster than BPS. Component-Based Machine Learning (CBML) enhances the capabilities of the monolithic ML model. Extending the monolithic ML approach, the paper presents deep-learning architectures and component development methods, and evaluates their suitability for design space exploration in building design. Results indicate that deep learning increases the performance of models over simple artificial neural network models. Methods such as transfer learning and Multi-Task Learning make the component development process more efficient. Testing the deep-learning model on 201 new design cases indicates that its cooling energy prediction (R2: 0.983) is similar to BPS, while errors for heating energy predictions (R2: 0.848) are higher than BPS. The higher heating energy prediction error can be resolved by collecting heating data using better design space sampling methods that cover the heating demand distribution effectively. Given that the accuracy of the deep-learning model for heating predictions can be increased, the major advantage of deep-learning models over BPS is their high computation speed. BPS required 1145 s to simulate 201 design cases; using the deep-learning model, similar results can be obtained in 0.9 s. This high computation speed makes deep-learning models suitable for design space exploration.
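A shared-trunk, multi-head network is one common way to realise the Multi-Task Learning mentioned above: heating and cooling predictions share a learned representation of the design parameters. The sketch below is a hypothetical PyTorch illustration; the layer sizes, the 12 input features and the class name are assumptions, not the paper's architecture:

```python
import torch
import torch.nn as nn

class EnergySurrogate(nn.Module):
    """Multi-task surrogate for building performance simulation (sketch)."""
    def __init__(self, n_features=12):
        super().__init__()
        # Shared trunk: a common representation of the design parameters.
        self.trunk = nn.Sequential(
            nn.Linear(n_features, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
        )
        # Separate heads, one per prediction task.
        self.heating_head = nn.Linear(64, 1)
        self.cooling_head = nn.Linear(64, 1)

    def forward(self, x):
        z = self.trunk(x)
        return self.heating_head(z), self.cooling_head(z)

model = EnergySurrogate()
designs = torch.randn(201, 12)     # 201 candidate designs, 12 features each
heating, cooling = model(designs)  # one batched forward pass scores them all
```

Once trained, a single batched forward pass over all candidate designs takes a fraction of a second, which is where speedups of the reported magnitude (1145 s of BPS versus 0.9 s) come from.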

14.
Voas, J. IEEE Software, 1998, 15(4):22-27
As we continue to move toward component-based software engineering, software development will become more like traditional manufacturing: developers will code less and design and integrate more. The author argues that to reap the benefits of component-based development (reduced time to market, more user choice, and lower costs), we must rethink our software maintenance strategies. He gives a wide-ranging overview of the maintenance challenges raised by component-based development.

15.
The prediction of query performance is an interesting and important issue in Information Retrieval (IR). Current predictors involve the use of relevance scores, which are time-consuming to compute. Therefore, current predictors are not very suitable for practical applications. In this paper, we study six predictors of query performance, which can be generated prior to the retrieval process without the use of relevance scores. As a consequence, the cost of computing these predictors is marginal. The linear and non-parametric correlations of the proposed predictors with query performance are thoroughly assessed on the Text REtrieval Conference (TREC) disk4 and disk5 (minus CR) collection with the 249 TREC topics that were used in the recent TREC2004 Robust Track. According to the results, some of the proposed predictors have significant correlation with query performance, showing that these predictors can be useful to infer query performance in practical applications.
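A typical predictor of this kind is a simple function of collection statistics, available before any retrieval happens; average inverse document frequency (IDF) of the query terms is a common example. A minimal sketch with a toy corpus (the choice of avgIDF here is illustrative, not necessarily one of the paper's six predictors):

```python
import math
from collections import Counter

corpus = [
    "component based software performance prediction",
    "query performance prediction in information retrieval",
    "reliability of component based systems",
]
N = len(corpus)
# Document frequency of each term.
df = Counter(term for doc in corpus for term in set(doc.split()))

def avg_idf(query):
    """Average IDF of the query terms, computed before retrieval.

    Rarer terms suggest a more specific and typically
    better-performing query; unseen terms are skipped.
    """
    terms = query.split()
    return sum(math.log(N / df[t]) for t in terms if df[t]) / len(terms)

print(avg_idf("component reliability"))   # ~0.75: more discriminative
print(avg_idf("performance prediction"))  # ~0.41: common, less specific
```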

16.
Digital libraries and information management systems are increasingly being developed according to component models with well-defined APIs and often with Web-accessible interfaces. In parallel with metadata access and harvesting, Web 2.0 mashups have demonstrated the flexibility of developing systems as independent distributed components. It can be argued that such distributed components can also be an enabler for scalability of service provision in medium to large systems. To test this premise, this article discusses how an existing component framework was modified to include support for scalability. A set of lightweight services and extensions were created to migrate and replicate services as the load changes. Experiments with the prototype system confirm that this system can in fact be quite effective as an enabler of transparent and efficient scalability, without the need to resort to complex middleware or substantial system reengineering. Finally, specific problem areas have been identified as future avenues for exploration at the crucial intersection of digital libraries and high-performance computing.

17.
This paper presents COnfECt, a model learning approach, which aims at recovering the functioning of a component-based system from its execution traces. We refer here to non-concurrent systems whose internal interactions among components are not observable from the environment. COnfECt specialises in the detection of the components of a black-box system and in the inference of models called systems of labelled transition systems (LTSs). COnfECt tries to detect components and their specific behaviours in traces, then generates an LTS for every discovered component, which captures its behaviours. Besides, it synchronises the LTSs together to express the functioning of the whole system. COnfECt relies on machine learning techniques to build models: it uses the notion of correlation among actions in traces to detect component behaviours and exploits a clustering technique to merge similar LTSs and synchronise them. We describe the three steps of COnfECt and the related algorithms in this paper. Then, we present some preliminary experiments.

18.
A Reliability Estimation Model for Component-Based Software
周娜琴, 张友生. 计算机应用 (Journal of Computer Applications), 2008, 28(6):1630-1631
Component-based software is modeled as a Markov process. To compensate for the neglect of connectors in previous work, usage-frequency calculation models for components and connectors are constructed for the different state types that occur in the process. On this basis, a reliability calculation method for the whole component-based software system is proposed and instantiated. Compared with traditional methods, this approach not only provides a more precise way to analyze software reliability but also broadens the model's range of application.
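The usage-frequency idea can be written in a standard form (shown here as an illustration; the abstract does not give the paper's exact formula): if component $i$ has reliability $R_i$ and expected visit count $f_i$ per run of the Markov chain, and connector $j$ has reliability $C_j$ and expected usage count $g_j$, then the system reliability is estimated as

$$ R_{\mathrm{sys}} \;\approx\; \prod_i R_i^{\,f_i} \cdot \prod_j C_j^{\,g_j}, $$

where the expected visit counts come from the fundamental matrix $(I - Q)^{-1}$ of the transient part $Q$ of the chain's transition matrix.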

19.
20.
We present enforceable component-based realtime contracts, the first extension of component-based software engineering technology that comprehensively supports adaptive realtime systems from specification all the way to the running system. To provide this support, we have extended component-based interface definition languages (IDLs) and component representations in repositories to express realtime requirements for components. The final software, which is assembled from the components, is then executed on a realtime operating system (RTOS) with the help of a component runtime system. RTOS resource managers and the IDL extensions are based on the same mathematical foundation. Thus, the component runtime system can use information expressed in a component-oriented manner in the extended IDL to derive parameters for task-based admission and scheduling in the RTOS. Once basic realtime properties can thus be guaranteed, runtime support can be extended to more elaborate schemes that also support adaptive applications (container-managed quality assurance). We claim that this study convincingly demonstrates how component-based software engineering can be extended to build systems with non-functional requirements.
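The step from per-component realtime requirements to task-based admission can be illustrated with the classic utilisation test: a periodic task with worst-case execution time C and period T contributes C/T, and under rate-monotonic scheduling admission is safe while the total utilisation stays below the Liu-Layland bound. A small Python sketch, assuming the contracts have already been reduced to (C, T) pairs (the reduction itself and the function name are hypothetical):

```python
def rm_admissible(tasks):
    """Rate-monotonic admission test (Liu & Layland, 1973).

    tasks: list of (wcet, period) pairs, derived in this sketch from the
    realtime requirements declared in the components' extended IDL.
    Sufficient (not necessary) condition: U <= n * (2**(1/n) - 1).
    """
    n = len(tasks)
    utilisation = sum(c / t for c, t in tasks)
    bound = n * (2 ** (1.0 / n) - 1)
    return utilisation <= bound

# Two components requesting 2 ms every 10 ms and 3 ms every 15 ms.
print(rm_admissible([(2, 10), (3, 15)]))  # True: U = 0.4 <= ~0.828
```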
