Similar Literature: 20 similar records found.
1.
One of the benefits of the software product line approach is improved time-to-market. Changes in market needs require software requirements within product lines to remain flexible, and whenever software requirements change, the software architecture should evolve to correspond with them. Domain architecture should therefore be designed based on domain requirements. It is essential that there is traceability between requirements and architecture, and that the structure of the architecture is derived from quality requirements. The purpose of this paper is to provide a framework for modeling domain architecture based on domain requirements within product lines. In particular, we focus on the traceable relationship between requirements and architectural structures. Our framework consists of processes, methods, and a supporting tool. It uses four basic concepts, namely goal-based domain requirements analysis, the Analytic Hierarchy Process, a matrix technique, and architectural styles. Our approach is illustrated using the HIS (Home Integration System) product line. Finally, industrial examples are used to validate DRAMA.
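As a hedged illustration of the Analytic Hierarchy Process step in such a framework, the sketch below derives priority weights for quality attributes from a pairwise comparison matrix via its principal eigenvector and checks Saaty's consistency ratio. The attribute names and judgments are invented for illustration; this is not the authors' DRAMA tool.

    import numpy as np

    # Hypothetical pairwise comparisons of three quality attributes
    # (performance vs. modifiability vs. security) on Saaty's 1-9 scale.
    A = np.array([[1.0,   3.0, 5.0],
                  [1/3., 1.0, 2.0],
                  [1/5., 1/2., 1.0]])

    # AHP priorities: normalized principal right eigenvector of A.
    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)
    w = np.abs(eigvecs[:, k].real)
    w /= w.sum()

    # Consistency ratio CR = CI / RI, with Saaty's random index RI(3) = 0.58.
    n = A.shape[0]
    ci = (eigvals[k].real - n) / (n - 1)
    cr = ci / 0.58
    print("weights:", np.round(w, 3), " CR:", round(cr, 3))

A CR below about 0.1 is conventionally taken to mean the pairwise judgments are acceptably consistent.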

2.
The objective of this paper is to describe a grid-efficient parallel implementation of the Aitken–Schwarz waveform relaxation method for the heat equation problem. This new parallel domain decomposition algorithm, introduced by Garbey [M. Garbey, A direct solver for the heat equation with domain decomposition in space and time, in: Ulrich Langer et al. (Eds.), Domain Decomposition in Science and Engineering XVII, Springer, vol. 60, 2007, pp. 501–508], generalizes the Aitken-like acceleration method of the additive Schwarz algorithm for elliptic problems. Although the standard Schwarz waveform relaxation algorithm has a linear rate of convergence and low numerical efficiency, it can easily be optimized with respect to cache memory access and it scales well on a parallel system as the number of subdomains increases. The Aitken-like acceleration method transforms the Schwarz algorithm into a direct solver for the parabolic problem when the eigenvectors of the trace transfer operator are known a priori. A standard example is the linear three-dimensional heat equation problem discretized with a seven-point scheme on a regular Cartesian grid. The core idea of the method is to postprocess the sequence of interfaces generated by the additive Schwarz waveform relaxation solver. The parallel implementation of the domain decomposition algorithm presented here achieves robustness and scalability in heterogeneous distributed computing environments and is also naturally fault tolerant. All these features make such a numerical solver ideal for computational grid environments. This paper presents experimental results with a few loosely coupled parallel systems, remotely connected through the Internet and located in Europe, Russia, and the USA.
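The Aitken acceleration at the heart of the method can be illustrated on a scalar fixed-point iteration. The minimal sketch below (the one-mode analogue of postprocessing the Schwarz interface traces, not the paper's 3D solver) applies the classical Aitken delta-squared formula to a linearly converging sequence:

    import numpy as np

    # Linearly converging model iteration u_{k+1} = g(u_k): the scalar
    # analogue of one interface mode of an additive Schwarz iteration.
    rho, u_exact = 0.9, 2.0          # contraction rate, fixed point
    g = lambda u: u_exact + rho * (u - u_exact)

    u = [0.0]
    for _ in range(3):               # three plain Schwarz-like sweeps
        u.append(g(u[-1]))

    # Aitken delta-squared extrapolation: exact when the error equation
    # is purely linear, hence a "direct solver" once the rate is known.
    u0, u1, u2 = u[0], u[1], u[2]
    u_acc = u2 - (u2 - u1) ** 2 / ((u2 - u1) - (u1 - u0))

    print("plain iterate error:", abs(u[-1] - u_exact))
    print("Aitken error:       ", abs(u_acc - u_exact))

For this exactly linear model the extrapolated value hits the fixed point to machine precision after only three sweeps, which is the scalar counterpart of turning Schwarz into a direct solver.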

3.
4.
In near-wall turbulence modeling it is necessary to resolve a thin layer near the solid boundary that is characterized by high gradients of the solution. A sufficiently accurate resolution of such a layer can consume most of the computational time, and the situation becomes even worse for unsteady problems. To avoid time-consuming computations, a new approach is developed based on a non-overlapping domain decomposition. A boundary condition of Robin type at the interface boundary is obtained via transfer of the boundary condition from the wall. For the first time, interface boundary conditions of Robin type are derived for a model nonstationary equation that simulates the key terms of the unsteady boundary layer equations. In the case of stationary solutions, the approach automatically reduces to the technique developed earlier for steady problems. The considered test cases demonstrate that unsteady effects can be significant for near-wall domain decomposition; in particular, they can be important in the case of the wall-function-based approach.
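A minimal sketch of a non-overlapping decomposition coupled by Robin interface conditions is given below for a steady 1D model problem (Lions' Robin–Robin algorithm with an ad hoc Robin parameter), not the paper's unsteady near-wall equations or its specific condition-transfer construction:

    import numpy as np

    # -u'' = 1 on (0, 1), u(0) = u(1) = 0, split at x = 0.5 into two
    # non-overlapping subdomains coupled by Robin data (du/dn_i + p*u)
    # exchanged across the interface (Lions' algorithm).
    N, p = 40, 2.0                   # N intervals; ad hoc Robin parameter
    h = 1.0 / N
    m = N // 2                       # interface node index
    x = np.linspace(0.0, 1.0, N + 1)
    u_exact = 0.5 * x * (1.0 - x)

    def solve_subdomain(nloc, g, left_robin):
        """Solve -u'' = 1 on nloc+1 nodes: one Dirichlet end (u = 0), one
        Robin end with a first-order one-sided flux and Robin datum g."""
        A = np.zeros((nloc + 1, nloc + 1)); b = np.ones(nloc + 1)
        for i in range(1, nloc):
            A[i, i-1], A[i, i], A[i, i+1] = -1/h**2, 2/h**2, -1/h**2
        if left_robin:   # Robin at node 0, Dirichlet at node nloc
            A[0, 0], A[0, 1], b[0] = 1/h + p, -1/h, g
            A[nloc, nloc], b[nloc] = 1.0, 0.0
        else:            # Dirichlet at node 0, Robin at node nloc
            A[0, 0], b[0] = 1.0, 0.0
            A[nloc, nloc-1], A[nloc, nloc], b[nloc] = -1/h, 1/h + p, g
        return np.linalg.solve(A, b)

    g1 = g2 = 0.0                    # initial Robin data at the interface
    for it in range(20):
        u1 = solve_subdomain(m, g1, left_robin=False)      # x in [0, 0.5]
        u2 = solve_subdomain(N - m, g2, left_robin=True)   # x in [0.5, 1]
        # Exchange: g_i_new = (-du_j/dn_j + p*u_j) at the interface.
        g1 = (u2[1] - u2[0]) / h + p * u2[0]
        g2 = -(u1[-1] - u1[-2]) / h + p * u1[-1]

    print("interface mismatch:", abs(u1[-1] - u2[0]))
    print("max error vs exact:", max(abs(u1 - u_exact[:m+1]).max(),
                                     abs(u2 - u_exact[m:]).max()))

The remaining error is dominated by the first-order one-sided interface flux; the point of the sketch is only the structure of the Robin exchange, whose parameter choice is what condition-transfer approaches like the paper's aim to improve.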

5.
6.
Digital manufacturing technologies [1] are gaining more and more importance as key enabling technologies for future manufacturing, especially where flexible, scalable manufacturing of small and medium series of customized parts is required. The paper describes a new approach for the design and manufacturing of complex three-dimensional components that builds on a combination of digital manufacturing technologies such as laminated object manufacturing, laser technologies, and e-printing technologies. The micro component is made up of stacks of functionalized layers of polymer film. The concept is currently being developed further in the project SMARTLAM [2], [3], funded by the European Commission. The manufacturing system is based on flexible, scalable, and modular equipment, an approach that enables the manufacturing of different small-size batches in a short time without tool or mask making. Different modules can be combined through defined hardware and software interfaces. To avoid the time-consuming and difficult reprogramming that setting up the manufacture of a new component would otherwise entail, a Function-Block Runtime (FORTE) executes the generated control applications platform-independently and coordinates the functionalities of the component modules. The control system is designed to integrate all processes as well as the base platform, with features far beyond ordinary PLC systems. One aspect is the use of process data from the data acquisition system to simulate and optimize the processes; these results are incorporated into the main machine control system. Another aspect is the vision system for flexible quality control and closed-loop positioning control with visual servoing. The paper shows the overall concept of SMARTLAM and demonstrates the control system as well as the modular equipment approach using the example of the control system for the alignment of different stacks and the inspection system.

7.
The evaluation of environmentally conscious manufacturing programs is similar to that of many strategic initiatives and their justification methodologies. This similarity arises from the fact that multiple factors need to be considered, many of which have long-term and broad implications for an organization. The types of programs that could be evaluated range from the appropriate selection of product designs and materials to major disassembly programs that may be implemented in parallel with standard assembly programs. The methodology involves the synthesis of the analytic network process (ANP) and data envelopment analysis (DEA). We consider some of the more recent modeling innovations in each of these areas to help us address a critical and important decision that many managers and organizations are beginning to face. An illustrative example provides some insights into the application of this methodology. Additional issues and research questions are also identified.
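As a hedged sketch of the DEA side of such a synthesis, the following implements only the classical input-oriented CCR envelopment model with invented program data; the ANP weighting and any paper-specific model extensions are not reproduced.

    import numpy as np
    from scipy.optimize import linprog

    # Invented data: 5 candidate programs (DMUs), 2 inputs (cost, effort),
    # 2 outputs (waste reduction, recyclability score).
    X = np.array([[4., 3.], [7., 3.], [8., 1.], [4., 2.], [2., 4.]])  # inputs
    Y = np.array([[1., 2.], [3., 1.], [2., 3.], [1., 1.], [2., 2.]])  # outputs
    n = X.shape[0]

    def ccr_efficiency(o):
        """Input-oriented CCR: min theta s.t. X^T lam <= theta * x_o,
        Y^T lam >= y_o, lam >= 0. Variables: [theta, lam_1..lam_n]."""
        c = np.r_[1.0, np.zeros(n)]
        A1 = np.c_[-X[o][:, None], X.T]            # X^T lam - theta x_o <= 0
        b1 = np.zeros(X.shape[1])
        A2 = np.c_[np.zeros((Y.shape[1], 1)), -Y.T]  # -Y^T lam <= -y_o
        b2 = -Y[o]
        res = linprog(c, A_ub=np.vstack([A1, A2]), b_ub=np.r_[b1, b2],
                      bounds=[(0, None)] * (n + 1))
        return res.fun

    for o in range(n):
        print(f"DMU {o}: efficiency = {ccr_efficiency(o):.3f}")

Programs with efficiency 1.0 lie on the empirical best-practice frontier; scores below 1.0 indicate how far inputs could be proportionally contracted.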

8.
A framework for empirical evaluation of conceptual modeling techniques
The paper presents a framework for the empirical evaluation of conceptual modeling techniques used in requirements engineering. The framework is based on the notion that modeling techniques should be compared via their underlying grammars. It identifies two types of dimensions in empirical comparisons: affecting and affected dimensions. The affecting dimensions provide guidance for task definition, independent variables, and controls, while the affected dimensions define the possible mediating and dependent variables. In particular, the framework addresses the dependence between the modeling task (model creation versus model interpretation) and the performance measures of the modeling grammar. The utility of the framework is demonstrated by using it to categorize existing work on evaluating modeling techniques. The paper also discusses theoretical foundations that can guide hypothesis generation and the measurement of variables. Finally, the paper addresses possible levels for categorical variables and ways to measure interval variables, especially the grammar performance measures.

9.
The aim of this paper is to present a generic component framework for system modeling that satisfies the main requirements for component-based development in software engineering. To this end, we have defined a framework that can be used, given an adequate instantiation, with a large class of semi-formal and formal modeling techniques. The framework is also flexible with respect to the connection of components, providing a compositional semantics of components; more precisely, this means that the semantics of a system can be inferred from the semantics of its components. In contrast to other component concepts for data type specification techniques, our component framework is based on a generic notion of transformations. In particular, refinements and transformations are used to express intradependencies between the export interface and the body of a component, and interdependencies between the import and export interfaces of different components. The generic component framework generalizes module concepts proposed in the literature for different kinds of Petri nets and graph transformation systems, and also seems suitable for visual modeling techniques, including parts of the UML, provided these techniques offer a suitable refinement or transformation concept. In this paper the generic approach is instantiated in two steps: first to high-level replacement systems, generalizing the transformation concept of graph transformations, and then to low-level and high-level Petri nets. To show applicability, we present sample components from a case study in the domain of production automation, as proposed in a priority program of the German Research Council (DFG).

10.
This paper presents the development of ParMIN3P-THCm, a parallel version of the reactive transport code MIN3P-THCm that runs efficiently on machines ranging from desktop PCs to supercomputers. Parallelization was achieved through the domain decomposition method based on the PETSc library. The code has been developed from the ground up for parallel scalability and has been tested on up to 768 processors with problem sizes of up to 100 million unknowns, showing strong scalability in modeling large-scale reactive transport problems. The total speedup is close to ideal and near linear up to 768 processors when the number of degrees of freedom per processor exceeds 8,000–15,000, depending on the relative complexity of the reactive transport and flow problems. The improved code efficiency allows refinement of the model discretization in both space and time and will facilitate 3D simulations that were impractical with the sequential version of MIN3P-THCm.
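The strong-scaling bookkeeping behind such claims is simple arithmetic: speedup S(p) = T(1)/T(p) and parallel efficiency E(p) = S(p)/p, tracked against degrees of freedom per processor. The sketch below uses hypothetical timings, not the published ParMIN3P-THCm measurements.

    # Hypothetical strong-scaling run: fixed problem of 12 million unknowns.
    dof = 12_000_000
    timings = {1: 9600.0, 96: 105.0, 192: 55.0, 384: 30.0, 768: 18.5}  # s

    t1 = timings[1]
    for p, tp in sorted(timings.items()):
        speedup = t1 / tp                 # S(p) = T(1) / T(p)
        efficiency = speedup / p          # E(p) = S(p) / p
        print(f"p={p:4d}  dof/proc={dof // p:9d}  "
              f"S={speedup:7.1f}  E={efficiency:5.2f}")

Efficiency typically degrades once dof/proc falls below some threshold (the abstract reports roughly 8,000–15,000 for this code), because communication and synchronization costs stop being amortized by local work.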

11.
12.
We consider a rational algebraic large sparse eigenvalue problem arising from the finite element discretization of the dissipative acoustic model in the pressure formulation. The nonlinearity introduced by the frequency-dependent impedance poses a challenge in developing an efficient numerical algorithm for such eigenvalue problems. In this article, we reformulate the rational eigenvalue problem as a cubic eigenvalue problem and then solve the resulting cubic problem by a parallel restricted additive Schwarz preconditioned Jacobi–Davidson algorithm (ASPJD). To validate the ASPJD-based eigensolver, we numerically demonstrate the optimal convergence rate of our discretization scheme and show that ASPJD converges successfully to all target eigenvalues. The extraneous root introduced by the reformulation does not produce any observed side effects, such as undesirable oscillatory convergence behavior. Through intensive numerical experiments, we identify an efficient correction-equation solver, an effective algorithmic parameter setting, and an optimal mesh partitioning. Furthermore, the numerical results suggest that the ASPJD-based eigensolver with an optimal mesh partitioning achieves superlinear scalability on a distributed parallel computing cluster of up to 192 processors.
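A hedged sketch of the reformulation idea follows: a cubic matrix polynomial eigenproblem turned into a generalized linear eigenproblem by companion linearization, solved densely with LAPACK rather than with the paper's parallel Jacobi–Davidson method. The matrices here are small random stand-ins, not the FEM acoustic operators.

    import numpy as np
    from scipy.linalg import eig

    # Cubic eigenproblem (l^3*A3 + l^2*A2 + l*A1 + A0) x = 0 for small
    # random stand-in matrices.
    rng = np.random.default_rng(0)
    n = 5
    A0, A1, A2, A3 = (rng.standard_normal((n, n)) for _ in range(4))

    # First companion linearization: L z = l * M z, z = [x; l*x; l^2*x].
    I, Z = np.eye(n), np.zeros((n, n))
    L = np.block([[Z,   I,   Z],
                  [Z,   Z,   I],
                  [-A0, -A1, -A2]])
    M = np.block([[I, Z, Z],
                  [Z, I, Z],
                  [Z, Z, A3]])
    lams, vecs = eig(L, M)

    # Verify: residual of the cubic polynomial at a few computed eigenpairs.
    for lam, z in zip(lams[:3], vecs.T[:3]):
        x = z[:n]
        r = (lam**3 * A3 + lam**2 * A2 + lam * A1 + A0) @ x
        print(f"lambda = {lam:.3f}, residual = {np.linalg.norm(r):.2e}")

The linearization triples the problem dimension, which is one reason large sparse problems favor iterative projection methods such as Jacobi–Davidson over dense linearized solves.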

13.
Stack filters are operators that commute with the thresholding operation: thresholding a signal, applying the binary filter to each thresholded binary signal, and then summing up (stacking) the results yields the same result as applying the multi-level (gray-scale) filter to the original signal. Several approaches have been proposed for designing optimal stack filters from training data, where optimality is characterized in terms of costs based on input-output joint observations. This work considers stack filter design from training data under a general statistical framework developed in the context of morphological image operator design. This framework (1) provides a common point of view on distinct design approaches, which is useful for comparative analysis or for emphasizing differences, (2) clearly answers the question of why binary signals from different threshold levels, although following distinct distributions, can be pooled together in the cost estimation process, and (3) helps to show that several stack filter design approaches based on lattice-diagram search methods share a common underlying formulation.
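The defining commutation property is easy to verify numerically for the best-known stack filter, the median. The sketch below (illustrative only, not the paper's training-based design procedure) checks that thresholding, binary median filtering, and stacking reproduce the gray-scale median filter exactly:

    import numpy as np
    from numpy.lib.stride_tricks import sliding_window_view

    def window_median(s, w=3):
        """Running median over all full windows of width w (no padding)."""
        return np.median(sliding_window_view(s, w), axis=1)

    rng = np.random.default_rng(1)
    x = rng.integers(0, 8, size=50)       # gray-scale signal, levels 0..7
    L = int(x.max())

    # Gray-scale median filtering applied directly.
    direct = window_median(x)

    # Threshold decomposition: binary-filter each level set, then stack.
    stacked = sum(window_median((x >= t).astype(int)) for t in range(1, L + 1))

    print("commutes with thresholding:", np.array_equal(direct, stacked))

The identity holds because the median is an increasing order statistic; the same check would fail for, say, a linear moving-average filter, which is not a stack filter.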

14.
Knowledge of the business domain (e.g., insurance claims, human resources) is crucial to analysts' ability to conduct good requirements analysis (RA). However, current practices afford analysts little assistance in acquiring domain knowledge. We argue that traditional reuse repositories could be augmented with rich faceted information on components/services and artifacts such as business-process templates to help analysts acquire domain knowledge during RA. In this paper, we present the design of a Knowledge Based Component Repository (KBCR) for facilitating RA, and then report on the design and development of a KBCR prototype. We illustrate its application in a system populated with components and process templates for the auto insurance claim domain. An empirical study was conducted to assess its effectiveness in improving RA. Results showed that KBCR enhanced analysts' business domain knowledge and helped them better prepare for RA. Our key research contribution is to offer analysts a rich repository (i.e., KBCR) containing domain knowledge that they can draw on to acquire the domain knowledge crucial for carrying out RA. While repositories of reusable components have been employed for some time, no one has used such repositories to help analysts acquire domain knowledge in order to improve the RA of a system.

15.
Work domain analysis (WDA) has been applied to a range of complex work domains, but few WDAs have been undertaken in medical contexts. One pioneering effort suggested that clinical abstraction is not based on means-ends relations, whereas another downplayed the role of bio-regulatory mechanisms. In this paper it is argued that the bio-regulatory mechanisms that govern physiological behaviour must be part of WDA models of patients as the systems at the core of intensive care units. Furthermore, it is argued that because the inner functioning of patients is not completely known, clinical abstraction is based on hypothetico-deductive abstract reasoning. This paper presents an alternative modelling framework that conforms to the broader aspirations of WDA. A modified version of the viable systems model is used to represent the patient system as a nested dissipative structure, while aspects of the recognition-primed decision model are used to represent the information resources available to clinicians in ways that support if...then conceptual relations. These two frameworks come together to form the recursive diagnostic framework, which may provide a more appropriate foundation for information display design in the intensive care unit.
Abbreviations: ADS, abstraction decomposition space; DST, dissipative structures theory; HIV/AIDS, human immunodeficiency virus/acquired immunodeficiency syndrome; ICU, intensive care unit; RDF, recursive diagnostic framework; RPD, recognition-primed decision model; R-VSM, revised viable systems model; VSM, viable systems model; WDA, work domain analysis.

16.
We present and describe a modeling and analysis framework for monitoring protected area (PA) ecosystems with net primary productivity (NPP) as an indicator of health. It brings together satellite data, an ecosystem simulation model (NASA-CASA), spatial linear models with autoregression, and a GIS to provide practitioners with a low-cost, accessible ecosystem monitoring and analysis system (EMAS) at landscape resolutions. The EMAS is evaluated and assessed with an application example in Yellowstone National Park aimed at identifying the causes and consequences of drought. Utilizing five predictor covariates (solar radiation, burn severity, soil productivity, temperature, and precipitation), spatio-temporal analysis revealed how landscape controls and climate (summer vegetation moisture stress) affected patterns of NPP according to vegetation functional type, species cover type, and successional stage. These results supported regional and national trends of NPP in relation to carbon fluxes and lagged effects of climate. Overall, the EMAS provides valuable decision support for PAs regarding informed land-use planning, conservation programs, vital-sign monitoring, control programs (fire fuels, invasives, etc.), and restoration efforts.
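A hedged sketch of the non-spatial core of such an analysis follows: ordinary least squares of NPP on the five named covariates, with invented pixel data. The paper's spatial linear models additionally include an autoregressive term for spatial autocorrelation, which is omitted here.

    import numpy as np

    # Invented landscape pixels: NPP regressed on the five predictor
    # covariates named in the study. This is plain OLS; a real spatial
    # analysis would add an autoregressive error/lag term.
    rng = np.random.default_rng(2)
    n = 500
    names = ["solar_radiation", "burn_severity", "soil_productivity",
             "temperature", "precipitation"]
    X = rng.standard_normal((n, len(names)))       # standardized covariates
    beta_true = np.array([0.6, -0.8, 0.5, 0.3, 0.7])
    npp = 2.0 + X @ beta_true + 0.3 * rng.standard_normal(n)

    A = np.c_[np.ones(n), X]                       # add intercept column
    coef, *_ = np.linalg.lstsq(A, npp, rcond=None)
    for name, b in zip(["intercept"] + names, coef):
        print(f"{name:>18s}: {b:+.3f}")

Ignoring spatial autocorrelation tends to understate coefficient standard errors, which is precisely why the EMAS uses autoregressive spatial models rather than plain OLS.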

17.
The rapid growth of the Internet and the expansion of electronic commerce applications in manufacturing have given rise to electronic customer relationship management (e-CRM), which enhances overall customer satisfaction. However, when confronted with the range of e-CRM methods, manufacturing companies struggle to identify the one most appropriate to their needs. This paper presents a novel structured approach to evaluating and selecting the best agile e-CRM framework in a rapidly changing manufacturing environment. The e-CRM frameworks are evaluated with respect to their customer-oriented and financially oriented features for achieving manufacturing agility. Initially, the frameworks are prioritized according to their financially oriented characteristics using a fuzzy group real options analysis (ROA) model. Next, they are ranked according to their customer-oriented characteristics using a hybrid fuzzy group permutation and a four-phase fuzzy quality function deployment (QFD) model with respect to the three main perspectives of agile manufacturing (strategic, operational, and functional agility). Finally, the best agile e-CRM framework is selected using a technique for order preference by similarity to the ideal solution (TOPSIS) model. We also present a case study to demonstrate the applicability of the proposed approach and to exhibit the efficacy of its procedures and algorithms.
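As a hedged sketch of the final selection step, the following implements only the classical crisp TOPSIS ranking with invented scores; the fuzzy group ROA and QFD stages that would produce these scores in the paper's approach are not reproduced.

    import numpy as np

    # Invented decision matrix: 4 candidate e-CRM frameworks scored on
    # 3 benefit criteria (strategic, operational, functional agility).
    D = np.array([[7., 9., 6.],
                  [8., 7., 8.],
                  [9., 6., 7.],
                  [6., 8., 9.]])
    w = np.array([0.5, 0.3, 0.2])                 # invented criteria weights

    R = D / np.linalg.norm(D, axis=0)             # vector normalization
    V = R * w                                     # weighted normalized matrix
    ideal, anti = V.max(axis=0), V.min(axis=0)    # all criteria are benefits
    d_plus = np.linalg.norm(V - ideal, axis=1)    # distance to ideal point
    d_minus = np.linalg.norm(V - anti, axis=1)    # distance to anti-ideal
    closeness = d_minus / (d_plus + d_minus)

    print("closeness:", np.round(closeness, 3))
    print("best framework:", int(np.argmax(closeness)) + 1)

The framework with the highest closeness coefficient is simultaneously nearest the ideal and farthest from the anti-ideal solution; for cost criteria, the ideal/anti-ideal roles of max and min would be swapped.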

18.
A nonlinear dynamic behavioral model for radio frequency power amplifiers is presented. It uses orthonormal basis functions (Kautz functions) with complex poles that differ for each nonlinear order. It has the same general properties as Volterra models, but the number of parameters is significantly smaller. Using frequency weighting, the out-of-band model error can be reduced. Using experimental data, it was found that the optimal poles were the same for different input powers and for the different nonlinear orders. The optimal poles were also the same for direct and inverse models, which could be explained theoretically as a general property of nonlinear systems with negligible linear memory effects. For power amplifiers with negligible linear memory effects, the model can therefore be used as either a direct or an inverse model with the same model error.
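A hedged sketch of the orthonormal-basis idea follows, using a real-pole Laguerre basis (the single-real-pole special case of the Kautz basis) fitted by linear least squares on synthetic data; the complex, order-dependent poles and frequency weighting of the paper's model are not reproduced.

    import numpy as np
    from scipy.signal import lfilter

    def laguerre_outputs(u, a, K):
        """Outputs of K discrete-time Laguerre filters with real pole a
        (|a| < 1) driven by u: first section sqrt(1-a^2)/(1 - a z^-1),
        then a cascade of all-pass sections (-a + z^-1)/(1 - a z^-1)."""
        cols = []
        s = lfilter([np.sqrt(1 - a**2)], [1.0, -a], u)
        for _ in range(K):
            cols.append(s)
            s = lfilter([-a, 1.0], [1.0, -a], s)
        return np.column_stack(cols)

    rng = np.random.default_rng(3)
    N, a, K = 2000, 0.7, 4
    u = rng.standard_normal(N)

    # Synthetic Hammerstein "amplifier": static 3rd-order nonlinearity
    # followed by linear dynamics with a pole at 0.6.
    y = lfilter([0.5], [1.0, -0.6], u + 0.1 * u**3)

    # Linear-in-parameters fit: one basis expansion per nonlinear order
    # (same pole for both orders here; the paper uses per-order poles).
    Phi = np.c_[laguerre_outputs(u, a, K), laguerre_outputs(u**3, a, K)]
    theta, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    nmse = np.sum((y - Phi @ theta) ** 2) / np.sum(y ** 2)
    print(f"normalized model error: {nmse:.1e}")

Because the model is linear in its parameters, identification reduces to one least-squares solve, which is the key practical advantage over a full Volterra expansion.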

19.
To address the low accuracy and inefficient feature extraction of current methods for the individual identification of communication emitters, a feature extraction algorithm for communication emitters based on the intrinsic time-scale decomposition (ITD) model is proposed. The algorithm forms a feature vector from features of the original signal, features of the proper rotation components obtained by ITD decomposition, and features of the instantaneous amplitude-frequency spectrum, and uses a support vector machine (SVM) to obtain the classification result. Classification experiments on six real radio transmitters show that, without requiring any prior information, the algorithm achieves good classification performance and offers improvements in both classification accuracy and computational efficiency over feature extraction based on empirical mode decomposition (EMD).
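A hedged sketch of the classification stage only: a generic SVM on precomputed feature vectors. The ITD decomposition and the specific emitter features are not reproduced; the data are synthetic stand-ins for feature vectors from six emitters.

    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    # Synthetic stand-in: 6 emitters, 40 bursts each, 12-dim feature
    # vectors playing the role of the ITD/instantaneous-spectrum features.
    rng = np.random.default_rng(4)
    n_emitters, n_bursts, n_feat = 6, 40, 12
    centers = rng.normal(0.0, 1.0, (n_emitters, n_feat))
    X = np.vstack([c + 0.35 * rng.standard_normal((n_bursts, n_feat))
                   for c in centers])
    y = np.repeat(np.arange(n_emitters), n_bursts)

    Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.25,
                                          stratify=y, random_state=0)
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0))
    clf.fit(Xtr, ytr)
    print(f"held-out accuracy: {clf.score(Xte, yte):.3f}")

Feature scaling matters here because RBF-kernel SVMs are sensitive to per-dimension variance, hence the StandardScaler in the pipeline.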

20.
This paper describes a component-based framework for radio-astronomical imaging software systems. We consider optimal reuse strategies for packages of disparate architectures brought together within a modern component framework; in this practical case study, the legacy codes include both procedural and object-oriented architectures. We also consider the special requirements on scientific component middleware, with a specific focus on high-performance computing. We present an example application in this component architecture and outline future development planned for the project.

