Similar Documents (20 results)
1.
TNO is doing research in many areas of industrial automation; it is heavily involved in European projects financed by R&D programmes such as Esprit, Eureka and Brite, and in many ISO and CEN standardization activities. From this experience it has become clear that the I of Integration in CIM concerns not only the integration of the so-called “islands of automation” but also the integration of “islands of manufacturing”: how we can improve the transfer of manufacturing knowledge. We have to increase the semantic content of our integration approaches, so that not only computer scientists can be involved, but also people from the companies we are trying to help and people who are responsible for the development of new CIM components. The real problem is not one of technical integration of computers, but much more a “conceptual modelling” problem. A fundamental question is, for instance, how we can model information transfer upstream and downstream in the product life cycle at the semantic level that is really required. Based on the analysis of existing CIM projects such as CAD*I, CIM-OSA, Open CAM Systems (Esprit I), IMPACT (Esprit II), CAM-I's CIM Architecture, the Danish Principal model for CIM, and more, we developed a generic and reusable CIM reference architecture. This architecture shows manufacturing activities, real and information flow objects, CIM components, and industrial automation standards such as STEP, MAP, TOP, EDIFACT and MMS in an integrated way. In this paper we describe the CIM base model used to express the CIM reference architecture and give some details of the CIM reference architecture itself.

2.
3.
Service oriented architectures: approaches, technologies and research issues
Service-oriented architecture (SOA) is an emerging approach that addresses the requirements of loosely coupled, standards-based, and protocol-independent distributed computing. Typically, business operations running in an SOA comprise a number of invocations of different components, often in an event-driven or asynchronous fashion that reflects the underlying business process needs. Building an SOA requires a highly distributable communications and integration backbone. This functionality is provided by the Enterprise Service Bus (ESB), an integration platform that utilizes Web services standards to support a wide variety of communication patterns over multiple transport protocols and to deliver value-added capabilities for SOA applications. This paper reviews technologies and approaches that unify the principles and concepts of SOA with those of event-based programming. The paper also focuses on the ESB and describes a range of functions designed to offer a manageable, standards-based SOA backbone that extends middleware functionality by connecting heterogeneous components and systems and offering integration services. Finally, the paper proposes an approach that extends the conventional SOA to cater for essential ESB requirements, including capabilities such as service orchestration, “intelligent” routing, provisioning, message integrity and security, and service management. The layers in this extended SOA, in short xSOA, are used to classify research issues and current research activities.

4.
Service Oriented Architecture (SOA) is a popular paradigm at present because it provides a standards-based conceptual framework for flexible and adaptable enterprise-wide systems. This implies that most existing systems need to be reengineered to become SOA compliant. However, SOA reengineering projects raise serious strategic as well as technical questions that require management oversight. This paper, based on practical experience with SOA projects, presents a decision model for SOA reengineering projects that combines strategic and technical factors with cost-benefit analysis for integration versus migration decisions. The paper identifies the key issues that need to be addressed in enterprise application reengineering projects for SOA, examines the strategic alternatives, explains how the alternatives can be evaluated based on architectural and cost-benefit considerations, and illustrates the main ideas through a detailed case study.
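As a rough illustration of the cost-benefit side of such a decision model, a discounted-cashflow comparison of the two alternatives is sketched below. The helper and all figures are hypothetical, not taken from the paper; the paper's model weighs strategic and architectural factors on top of a purely financial view like this one.

```python
def npv(cashflows, rate):
    """Net present value of yearly cashflows at a given discount rate;
    the year-0 entry is the undiscounted upfront cost."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

# Hypothetical figures: integration (wrapping legacy systems as services)
# is cheaper up front; migration (reengineering to native SOA) costs more
# initially but yields higher returns in later years.
integration = npv([-200, 80, 80, 80, 80], rate=0.08)
migration = npv([-500, 40, 180, 180, 180], rate=0.08)
print(f"integration NPV: {integration:.0f}, migration NPV: {migration:.0f}")
```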

5.
The performance of modern microprocessors depends considerably on how efficiently their execution units are kept loaded. In modern applications, performance is significantly degraded by instruction stalls. Until recently, the problem of instruction stalls was studied mainly for superscalar microprocessors. This paper describes a software instruction prefetching method for VLIW/EPIC architectures that makes it possible to improve performance for a certain class of problems.

6.
Electronic Government (eGov) is a political priority worldwide. One of the core objectives of eGov is online public services provision (PSP). However, many eGov PSP systems fail to realize their objectives. Enterprise Architectures (EA) could help overcome some of the relevant obstacles. The objective of this paper is to derive a reference requirements set for eGov PSP that can be used in EA development. Aiming to capitalize on existing knowledge, we conduct a systematic literature review of eGov PSP system requirements. This results in a unified set of 186 requirements and a set of 19 stakeholders for eGov PSP systems. Based on these findings, we determine 16 overview use cases demonstrating the basic functionality of such systems. Our findings are modeled using ArchiMate 2.0 notation. The identified requirements set can be used by virtually any public organization providing public services to develop its own EA. As a result, it can reduce eGov PSP project failures, decrease software development costs, and improve their effectiveness and quality. Furthermore, it can serve as a basis for developing a complete reference EA for the eGov PSP domain.

7.

Context

A software reference architecture is a generic architecture for a class of systems that is used as a foundation for the design of concrete architectures from this class. The generic nature of reference architectures leads to less well-defined design and application contexts, which makes architecture goal definition and architecture design non-trivial steps rooted in uncertainty.

Objective

The paper presents a structured and comprehensive study on the congruence between context, goals, and design of software reference architectures. It proposes a tool for the design of congruent reference architectures and for the analysis of the level of congruence of existing reference architectures.

Method

We define a framework for congruent reference architectures. The framework is based on state-of-the-art results from literature and practice. We validate our framework and its quality as an analytical tool by applying it to the analysis of 24 reference architectures. The conclusions from our analysis are compared to the opinions of experts on these reference architectures, documented in literature and dedicated communication.

Results

Our framework consists of a multi-dimensional classification space and of five types of reference architectures formed by combining specific values from that space. Reference architectures that can be classified into one of these types have a better chance of success. The validation of our framework confirms its quality as a tool for analyzing the congruence of software reference architectures.

Conclusion

This paper supports software architects and scientists in the inception, design, and application of congruent software reference architectures. Applying the tool improves a reference architecture's chance of success.

8.
Particle filters are able to represent multi-modal beliefs but require a large number of particles to do so. The particle filter consists of three sequential steps: sampling, importance weighting, and resampling. Each step processes every particle in order to obtain the final state estimate. A high number of particles leads to a high processing time, reducing the particle filter's usefulness for real-time embedded systems. Through parallelization, the processing time can be significantly reduced. However, the resampling step is not easily parallelizable, since it requires the importance factor of each particle. In this work, a resampling scheme is proposed which uses virtual particles to solve the parallelization problem of the resampling component. Besides being evaluated against the multinomial resampling scheme, it is also implemented on a Xilinx Zynq-7000 FPGA.
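For reference, the three steps the abstract names can be written as a minimal NumPy sketch of one filter iteration with conventional multinomial resampling; the motion and likelihood models are assumed to be supplied by the caller, and the paper's virtual-particle scheme would replace step 3 below.

```python
import numpy as np

def particle_filter_step(particles, weights, motion, likelihood, z):
    """One iteration of a generic particle filter on measurement z."""
    n = len(particles)
    # 1. Sampling: propagate every particle through the motion model.
    particles = motion(particles)
    # 2. Importance factor: weight each particle by the likelihood of the
    #    measurement given that particle's state, then normalize.
    weights = weights * likelihood(particles, z)
    weights = weights / weights.sum()
    # 3. Resampling: each draw depends on the weights of *all* particles,
    #    which is what makes this step hard to parallelize.
    idx = np.random.choice(n, size=n, p=weights)
    return particles[idx], np.full(n, 1.0 / n)
```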

9.
10.
A computer-aided design (CAD) method and associated architectures are proposed for linear controllers. The design method and architecture are based on recent results that parameterize all controllers that stabilize a given plant. With this architecture, the design of controllers is a convex programming problem that can be solved numerically. Constraints on the closed-loop system, such as asymptotic tracking, decoupling, limits on peak excursions of variables, step response, settling time, and overshoot, as well as frequency-domain inequalities, are readily incorporated in the design. The minimization objective is quite general, with LQG (linear quadratic Gaussian), H∞, and new l1 types as special cases. The constraints and objective are specified in a control specification language that is natural for the control engineer, referring directly to step responses, noise powers, transfer functions, and so on.
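The convexity rests on the standard parameterization of all stabilizing controllers; in the textbook form (notation assumed here rather than quoted from the paper), every achievable closed-loop transfer matrix is affine in a free stable parameter Q:

$$H(Q) = T_1 + T_2\,Q\,T_3, \qquad Q \in \mathcal{RH}_\infty,$$

so any constraint or objective that is convex in the closed-loop map H is convex in Q, and the design reduces to a numerically solvable convex program over Q.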

11.
Wittneben proposed a technique which determines the structural characteristics of a program's memory referencing behavior based only on a sampling of the complete reference string. This method controls the cost of the measurement by adjusting the sampling rate while simultaneously attempting to determine the referencing behavior accurately, as reflected in its working set measurements. Wittneben's method is controlled by parameters with statically determined values. The aim of this paper is to present a modified sampling method in which the sampling parameters are updated dynamically according to the actual program referencing behavior. The updating process is controlled by the phase-transition structure of the program. Furthermore, working set measurements of the program are based on modified principles which take into account the most probable sources of errors made during the sampling process. Experiments conducted with synthetic and real reference strings suggest that the modified method is superior to the technique originally formulated by Wittneben.
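For orientation, working-set measurement over a sampled reference string can be sketched in a few lines of Python. This is a deliberately simplified illustration with a fixed sampling rate, i.e. the static scheme the paper improves on; the function name and parameters are ours.

```python
def working_set_sizes(refs, tau, sample_every=1):
    """Working-set size |W(t, tau)|: the number of distinct pages among
    the last tau sampled references at each sampled time t. Raising
    sample_every lowers measurement cost at the price of accuracy,
    the trade-off that sampling-based methods tune."""
    sampled = refs[::sample_every]
    return [len(set(sampled[max(0, t - tau + 1):t + 1]))
            for t in range(len(sampled))]
```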

12.
To support the transformation of system engineering from the project-based development of highly customer-specific solutions to the reuse and customization of ‘system products’, we integrate a process reference model for reuse- and product-oriented industrial engineering and a process reference model extending ISO/IEC 12207 on software life cycle processes with software- and system-level product management. We synthesize the key process elements of both models to enhance ISO/IEC 15288 on system life cycle processes with product- and reuse-oriented engineering and product management practices, as an integrated framework for process assessment and improvement in contexts where systems are developed and evolved as products.

13.
Given the proliferation of layered, multicore- and SMT-based architectures, it is imperative to deploy and evaluate important multi-level scientific computing codes, such as meshing algorithms, on these systems. We focus on Parallel Constrained Delaunay Mesh (PCDM) generation. We exploit coarse-grain parallelism at the subdomain level, medium-grain parallelism at the cavity level, and fine-grain parallelism at the element level. This multi-grain data-parallel approach targets clusters built from commercially available SMTs and multicore processors. Exploiting the coarser granularity facilitates scalability, both in execution time and in problem size, on loosely coupled clusters. Exploiting medium-grain parallelism allows performance improvement at the single-node level. Our experimental evaluation shows that the first generation of SMT cores is not capable of taking advantage of fine-grain parallelism in PCDM. Many of our experimental findings with PCDM extend to other adaptive and irregular multi-grain parallel algorithms as well.

14.
The aim of this work is to measure the impact of aspect-oriented programming on software performance. We hypothesized as follows: adding aspects to a base program will affect its performance because of the overhead caused by control-flow switching, and this incremental effect on performance becomes more pronounced as the number of join points increases. To test our hypotheses we carried out a case study of two concurrent architectures: Half-Sync/Half-Async and Leader/Followers. Aspects were extracted and encapsulated, and the performance of the base program was compared to that of the aspect program. Our results show that the aspect-oriented approach does not have a significant effect on performance and that in some cases an aspect-oriented program even outperforms the non-aspect program. We also investigated the effect of cache fault rate on performance for both aspect and non-aspect programs. Based on our experiments, there is a close correlation between cache fault rate and performance, which may favor aspect code if some aspects are frequently accessed. Additionally, the introduction of a large number of join points does not have a significant effect on performance.
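To make the hypothesized overhead source concrete, a Python stand-in for AspectJ-style weaving is sketched below: a decorator acts as 'around' advice, so every call to the join point pays an extra layer of control-flow switching. The names are ours, and the case study itself concerns aspect-oriented implementations of the two concurrency patterns, not this toy.

```python
import functools

def aspect(before=None, after=None):
    """Weave advice around a join point (a function call). Each woven
    call adds an extra frame of control flow, the overhead under study."""
    def weave(fn):
        @functools.wraps(fn)
        def wrapped(*args, **kwargs):
            if before:
                before(fn.__name__)
            result = fn(*args, **kwargs)
            if after:
                after(fn.__name__)
            return result
        return wrapped
    return weave

@aspect(before=lambda name: print(f"entering {name}"))
def handle_request(payload):
    return payload.upper()
```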

15.
16.
The papers [Campi, Lecchini & Savaresi (2002). Automatica, 38(8), 1337-1346; (2003). European Journal of Control, 9(1), 66-76] present a direct controller synthesis procedure that applies identification algorithms to filtered input-output plant data. This contribution discusses variations that, in some cases, may alleviate noise-induced correlation (in the open-loop case) and allow the approach to be applied to unstable plants. Importantly, it also introduces an invalidation test step based on the available data (i.e., prior to experimental controller testing) to check whether the flexibility of the controller parameterisation and the approximations involved are suitable for the design objectives or whether, on the contrary, the resulting closed loop may be unstable.

17.
Organizations have predominantly utilized reuse in Engineering Departments for the purposes of reducing the cost and improving the quality of the software they develop. While these strategies have been successful, we believe that the full potential of reuse can only be tapped when reuse is brought to the Executive Boardroom as well. We propose that organizations tap reuse not only for cutting costs, but also for strategic and wide-ranging business initiatives such as entering new markets, increasing agility in response to a dynamic marketplace, and competitive positioning and advantage. In order to do so effectively, organizations must harness the potential of reuse by migrating reuse into the company's business and product-line planning processes. We present a framework for analyzing and changing reuse business practices. Such practices include cost-reduction reuse, when the organization utilizes reuse for cost savings purposes; reuse-enabled business, when the organization uses reuse to create new business opportunities; and strategy-driven reuse, when the organization incorporates reuse in the formulation of its business and product-line strategy for the purposes of obtaining competitive positioning and advantage. To determine whether or not reuse is the proper software development strategy to pursue, we utilize concepts in competitive software engineering, an integrated approach to software development that is attuned to the competitive demands of the marketplace. First, a framework is established by identifying and analyzing the organization's goals, strengths, and limitations, its market and its competitive environment. Based on these analyses, possible business or product strategies are formulated and one or more are chosen that help achieve the organization's goals. Finally, a development strategy is chosen. Following this choice, each step of the decision cycle should be re-evaluated to ensure that it is consistent with the chosen development strategy.

18.
Recently, sparse signal recovery has received a lot of attention for its wide range of real applications. Such a problem can be solved better with a proper dictionary, so dictionary learning has become a promising direction and remains an open topic. As one of the strongest candidates, the K-singular value decomposition (K-SVD) algorithm has been widely adopted. However, its performance has reached a limit, since it cannot take into account the dependence between atoms. In this paper, we mine the inner structure of signals through their autocorrelations and use these priors as references. Based on these references, we present a new technique that incorporates them into the K-SVD algorithm, and we provide a new method to initialize the dictionary. Experiments on synthetic data and image data show that the proposed algorithm has a higher convergence ratio and lower error than the original K-SVD algorithm. It also performs better and more stably for sparse signal recovery.
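For context, the core K-SVD step the paper builds on refits one dictionary atom at a time via a rank-1 SVD of a restricted residual. A minimal NumPy sketch of that standard update follows; it does not include the paper's autocorrelation-based references or initialization.

```python
import numpy as np

def ksvd_atom_update(D, X, Y, k):
    """Update atom k of dictionary D (columns are atoms) and row k of the
    sparse codes X so that D @ X better fits the signals Y, using only
    the signals whose codes actually use atom k."""
    users = np.flatnonzero(X[k])
    if users.size == 0:
        return  # atom unused: nothing to refit
    # Residual of the using signals with atom k's contribution removed.
    E = Y[:, users] - D @ X[:, users] + np.outer(D[:, k], X[k, users])
    U, s, Vt = np.linalg.svd(E, full_matrices=False)
    D[:, k] = U[:, 0]            # new atom: leading left singular vector
    X[k, users] = s[0] * Vt[0]   # matching sparse coefficients
```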

19.
Volumetric spline parameterization and computational efficiency are two main challenges in isogeometric analysis (IGA). To tackle them, we propose a framework for computation reuse in IGA on a set of three-dimensional models with similar semantic features. Given a template domain, a B-spline based consistent volumetric parameterization is first constructed for a set of models with similar semantic features. An efficient quadrature-free method is investigated in our framework to compute the entries of the stiffness matrix by Bézier extraction and polynomial approximation. In our approach, evaluation of the stiffness matrix and imposition of the boundary conditions can be precomputed and reused during IGA on a set of CAD models. Examples with complex geometry are presented to show the effectiveness of our methods; efficiency similar to that of linear finite element analysis can be achieved for IGA performed on a set of models.
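The reuse hinges on a standard property of Bézier extraction (the notation below is the textbook form, assumed rather than quoted from the paper): on each element the spline basis is a fixed linear map of a Bernstein basis, so stiffness entries assembled in the Bernstein basis can be precomputed once and mapped per element. For a Poisson-type model problem:

$$\mathbf{N}^e = C^e\,\mathbf{B}, \qquad K^e = C^e\,K_B^e\,(C^e)^{\mathsf T}, \qquad (K_B^e)_{ij} = \int_{\Omega^e} \nabla B_i \cdot \nabla B_j \,\mathrm{d}\Omega,$$

where $C^e$ is the element extraction operator and $K_B^e$ depends only on the Bernstein basis and the element geometry.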

20.
Increasingly, software organisations are looking towards large-scale reuse as a way of improving productivity, raising quality, and reducing delivery timescales. Many in the reuse community have suggested notions of product-line development and domain engineering life-cycles. Achieving these in practice, however, requires a systematic process for “early” reuse (requirements reuse) as well as late reuse (code reuse). This paper discusses practical experience of early reuse. We describe FORE (Family Of REquirements), an approach that we have developed in our work in the domain of aircraft engine control systems. The FORE approach concentrates on the definition of a generic product concept and the formalisation of its requirements. We describe the FORE approach in general terms, and then show how it has been applied in an industrial case study. We make an initial evaluation of the FORE approach (and of early reuse in general) in terms of how it has changed an existing requirements engineering process. We compare the FORE approach to related work in early reuse and draw some conclusions about how the approach may scale to other problems.
