Similar Documents
1.
The architectures of high-end embedded systems have evolved into heterogeneous distributed integrated architectures. Scheduling multiple distributed mixed-criticality functions in heterogeneous distributed embedded systems is a considerable challenge because systems and functions have different requirements. The overall schedule length (i.e., makespan) is the main concern for system performance, whereas deadlines represent the major timing constraints of functions. Most algorithms use fairness policies to reduce the makespan in heterogeneous distributed systems. However, these fairness policies cannot meet the deadlines of most functions. Functions have different criticality levels (i.e., severities), and missing the deadline of a high-criticality function may cause fatal injuries to people. This study first constructs the related models for heterogeneous distributed embedded systems. It then presents the criticality certification, the scheduling framework, and the fairness of multiple heterogeneous earliest finish time (F_MHEFT) algorithm for heterogeneous distributed embedded systems. Finally, it proposes a novel algorithm, the deadline-span of multiple heterogeneous earliest finish time (D_MHEFT), for scheduling multiple mixed-criticality functions. The F_MHEFT algorithm aims to improve system performance, while the D_MHEFT algorithm tries to meet the deadlines of more high-criticality functions by sacrificing some performance. The experimental results demonstrate that D_MHEFT significantly reduces the deadline miss ratio (DMR) while keeping performance satisfactory compared with existing methods.
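To make the baseline concrete, here is a minimal Python sketch of plain HEFT-style list scheduling, the heterogeneous earliest-finish-time idea that algorithms such as F_MHEFT and D_MHEFT build on: tasks are ordered by upward rank and greedily placed on the processor that yields the earliest finish time. The data layout, names, and toy task graph are assumptions for illustration, not the paper's code, and the criticality/deadline handling of D_MHEFT is deliberately omitted.

```python
# Hedged sketch of HEFT list scheduling (the baseline the paper extends).

def upward_rank(t, succ, avg_cost, comm, memo):
    """Average length of the longest path from task t to the exit task."""
    if t not in memo:
        memo[t] = avg_cost[t] + max(
            (comm[(t, s)] + upward_rank(s, succ, avg_cost, comm, memo)
             for s in succ[t]), default=0.0)
    return memo[t]

def heft_makespan(tasks, succ, cost, comm):
    """cost[t][p]: runtime of t on processor p; comm[(u, v)]: transfer
    time if u and v are placed on different processors."""
    procs = range(len(next(iter(cost.values()))))
    avg = {t: sum(cost[t]) / len(cost[t]) for t in tasks}
    order = sorted(tasks, key=lambda t: -upward_rank(t, succ, avg, comm, {}))
    pred = {t: [u for u in tasks if t in succ[u]] for t in tasks}
    finish, free, place = {}, {p: 0.0 for p in procs}, {}
    for t in order:
        # earliest finish time of t over all processors
        eft, best = min(
            (max([free[p]] + [finish[u] + (comm[(u, t)] if place[u] != p else 0.0)
                              for u in pred[t]]) + cost[t][p], p)
            for p in procs)
        finish[t], place[t], free[best] = eft, best, eft
    return max(finish.values())  # overall schedule length (makespan)

# Toy DAG: a feeds b and c, which both feed d; two heterogeneous processors.
tasks = ["a", "b", "c", "d"]
succ = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []}
cost = {"a": [2, 3], "b": [4, 2], "c": [3, 3], "d": [2, 1]}
comm = {("a", "b"): 1, ("a", "c"): 1, ("b", "d"): 2, ("c", "d"): 1}
print(heft_makespan(tasks, succ, cost, comm))  # 7.0 for this toy graph
```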

2.
This paper addresses the design, development, and implementation of distributed process control systems. A novel object-oriented environment is presented that supports the phases of the development lifecycle. The core of this environment is the integrated design notation (IDN), which is based on the unified modelling language (UML). Its emphasis is on open architectures, and it therefore uses open standards such as UML (for software engineering), IEC 61131-3 (for process control), and SIMULINK (for control engineering). The IDN enables automated translation of models and languages across these different disciplines. The source code produced in the implementation phase is Java, which gives the resulting control systems greater portability.

3.
Systematic discovery and exploitation of commonality across related software systems is a fundamental technical requirement for achieving successful software reuse. By examining a class/family of related systems and the commonality underlying those systems, it is possible to obtain a set of reference models, i.e., the software architectures and components needed for implementing applications in the class. FORM (Feature-Oriented Reuse Method) supports the development of such reusable architectures and components (through a process called "domain engineering") and the development of applications using the domain artifacts produced by the domain engineering. FORM starts with an analysis of commonality among applications in a particular domain in terms of services, operating environments, domain technologies, and implementation techniques. The model constructed during the analysis is called a "feature" model; it captures commonality as an AND/OR graph, where AND nodes indicate mandatory features and OR nodes indicate alternative features selectable for different applications. This model is then used to define parameterized reference architectures and appropriate reusable components instantiatable during application development. Architectures are defined from three different viewpoints (subsystem, process, and module) and have an intimate association with the features. The subsystem architecture is used to package service features and allocate them to different computers in a distributed environment. Each subsystem is further decomposed into processes considering the operating-environment features. Modules are defined based on the features for domain technologies and implementation techniques. These architecture models, which represent an architecture at different levels of abstraction, are derived from the feature hierarchy captured in the feature model. Modules serve as the basis for creating reusable components, and their specification defines how they are integrated into the application (e.g., as-is integration of pre-coded components, instantiation of parameterized templates, and filling-in of skeletal code). Our experience has shown that for the electronic bulletin board and private branch exchange (PBX) domains, "features" form a common domain language and the main communication medium between application users and developers. Thus, the feature model well represents the "decision space" of software development and is a good starting point for identifying candidate reusable components.
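As a concrete illustration of the AND/OR feature model described above, the following Python sketch checks whether a feature selection is a legal product: AND nodes demand all of their children (mandatory features), OR nodes demand at least one (alternatives). The PBX-flavored feature names are invented for the example, not taken from FORM.

```python
# Hedged sketch of an AND/OR feature model; feature names are invented.
FEATURES = {
    # name: (node_type, children)
    "PBX":          ("AND", ["CallHandling", "Billing"]),
    "CallHandling": ("OR",  ["Forwarding", "Conferencing"]),
    "Billing":      ("AND", []),
    "Forwarding":   ("AND", []),
    "Conferencing": ("AND", []),
}

def valid_selection(root, selected):
    """Check that `selected` is a legal product of the feature model."""
    if root not in selected:
        return False
    kind, children = FEATURES[root]
    chosen = [c for c in children if c in selected]
    if kind == "AND" and len(chosen) != len(children):
        return False          # a mandatory feature is missing
    if kind == "OR" and children and not chosen:
        return False          # no alternative was picked
    return all(valid_selection(c, selected) for c in chosen)

print(valid_selection("PBX", {"PBX", "CallHandling", "Forwarding", "Billing"}))  # True
print(valid_selection("PBX", {"PBX", "Billing"}))  # False: CallHandling missing
```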

4.
A distributed program is a collection of several processes which execute concurrently, possibly on different nodes of a distributed system, and which cooperate with each other to realize a common goal. In this paper, we present a design of communication and synchronization primitives for distributed programs. The primitives are designed such that they can be provided by the kernel of a distributed operating system. An important feature of the design is that the configuration of a process, i.e., the identities of the processes with which it communicates, is specified separately from the computation performed by the process. This permits easy configuration and reconfiguration of processes. We identify different kinds of communication failures and provide distinct mechanisms for handling them. The communication primitives are not atomic actions. To enable the construction of atomic actions, two new program components, the atomic agent and the manager, are introduced. These are devoid of policy decisions regarding concurrency control and atomic commitment. We introduce the notion of a conflicts relation, with which a designer can construct either an optimistic or a pessimistic concurrency control scheme. The design also incorporates primitives for constructing nested atomic actions.
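The conflicts-relation idea lends itself to a compact sketch: the designer supplies a relation saying which operations conflict, and an optimistic scheme validates an atomic action at commit time against already-committed actions. The read/write conflict matrix below is one conventional instantiation, assumed here for illustration rather than taken from the paper.

```python
# Hedged sketch of a designer-supplied conflicts relation driving an
# optimistic concurrency-control check; the matrix is an assumption.

class Op:
    def __init__(self, kind, obj):
        self.kind, self.obj = kind, obj

CONFLICTS = {("read", "write"), ("write", "read"), ("write", "write")}

def conflicts(a, b):
    """The designer-chosen relation: do operations a and b conflict?"""
    return a.obj == b.obj and (a.kind, b.kind) in CONFLICTS

def optimistic_validate(committed_ops, my_ops):
    """Optimistic scheme: run without locks, abort at commit time if any
    of our operations conflicts with one of a committed atomic action."""
    return not any(conflicts(a, b) for a in committed_ops for b in my_ops)

print(optimistic_validate([Op("read", "x")], [Op("write", "x")]))  # False: abort
print(optimistic_validate([Op("read", "x")], [Op("read", "y")]))   # True: commit
```

A pessimistic scheme would consult the same relation up front, before granting access, which is exactly the degree of freedom the conflicts relation leaves to the designer.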

5.
A game-based model of reflexive decision-making is proposed. When a reflexive normal-form game is repeated several times, some of the agents (or even all of them) are shown to observe unexpected results of the game (e.g., actions of the opponents, values of payoff functions, etc.); the awareness structure is thus modified. We discuss three problems, notably: the awareness structure, the actions of the agents based on that structure, and the transformation of the awareness structure.

6.
Topological self-stabilization is an important concept for building robust open distributed systems (such as peer-to-peer systems) in which nodes can organize themselves into meaningful network topologies. The goal is to devise distributed algorithms where nodes forward, insert, and delete links to neighboring nodes, and that converge quickly to such a desirable topology, independently of the initial network configuration. This article proposes a new model to study the parallel convergence time. Our model sheds light on the achievable parallelism by avoiding bottlenecks of existing models that can yield a distorted picture. As a case study, we consider local graph linearization, i.e., how to build a sorted list of the nodes of a connected graph in a distributed and self-stabilizing manner. In order to study the main structure and properties of our model, we propose two variants of a very simple local linearization algorithm. For each of these variants, we present analyses of the worst-case and best-case parallel time complexities, as well as the performance under a greedy selection of the actions to be executed. The analysis turns out to be non-trivial despite the simple setting, and to complement our formal insights we report on experiments which indicate that the runtimes may be better in the average case.
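A sequential Python sketch of the kind of simple local linearization rule analyzed above: a node holding two neighbors on the same side keeps the closer one and delegates the farther one to it, until only the sorted list remains. The rule and the star-graph example are assumptions chosen to illustrate convergence; they are not the article's exact algorithm or its model of parallel time.

```python
# Hedged, sequential sketch of local graph linearization: if node u has
# two neighbors v < w on the same side, edge {u, w} becomes {v, w}.

def linearize_step(adj):
    """Apply one applicable linearization action; False if none remains."""
    for u in adj:
        right = sorted(n for n in adj[u] if n > u)
        left = sorted((n for n in adj[u] if n < u), reverse=True)
        for side in (right, left):
            if len(side) >= 2:
                v, w = side[0], side[1]        # v is closer to u than w
                adj[u].discard(w); adj[w].discard(u)
                adj[v].add(w); adj[w].add(v)   # delegate w to v
                return True
    return False

adj = {1: {4}, 2: {4}, 3: {4}, 4: {1, 2, 3}}   # a star, to be linearized
while linearize_step(adj):
    pass
print(adj)  # each node now links only to its neighbors in sorted order
```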

7.
Today's distributed computing systems incorporate different types of nodes with varied bandwidth constraints, which should be considered when designing cost-optimal job allocation schemes for better system performance. In this paper, we propose a fair pricing strategy for job allocation in bandwidth-constrained distributed systems. The strategy formulates an incomplete-information, alternating-offers bargaining game on two variables, namely the price per unit resource and the percentage of bandwidth allocated, for both single-class and multiclass jobs at each node. We present a cost-optimal job allocation scheme for single-class jobs that accounts for communication delay and, hence, link bandwidth. For fast and adaptive allocation of multiclass jobs, we describe three efficient heuristics and compare them under different network scenarios. The results show that the proposed algorithms are comparable to existing job allocation schemes in terms of the expected system response time over all jobs.

8.
Chordal rings have been proposed in the past as networks that combine the simple routing framework of rings with the lower diameter, wider bisection, and higher resilience of other architectures. Virtually all proposed chordal ring networks are node-symmetric, i.e., all nodes have the same in/out-degree and interconnection pattern. Unfortunately, such regular chordal rings are not scalable. In this paper, periodically regular chordal (PRC) ring networks are proposed as a compromise that combines low node degree with small diameter. By varying the PRC ring parameters, one can obtain architectures with significantly different characteristics (e.g., from linear to logarithmic diameter), while maintaining an elegant framework for computation and communication. In particular, a very simple and efficient routing algorithm works for the entire spectrum of PRC rings thus obtained. This flexibility has important implications for key system attributes such as architectural scalability, software portability, and fault tolerance. Our discussion is centered on unidirectional PRC rings with an in/out-degree of 2. We explore the basic structure, topological properties, optimization of parameters, VLSI layout, and scalability of such networks, develop packet and wormhole routing algorithms for them, and briefly compare them to competing fixed-degree architectures such as symmetric chordal rings, meshes, tori, and cube-connected cycles.
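The simplicity of routing on such networks can be illustrated with a greedy sketch on a uniform degree-2 unidirectional chordal ring, where every node i links to i+1 and to i+s (mod n). Real PRC rings attach chords only periodically, at group heads, so treat the following as a simplified assumption rather than the paper's algorithm.

```python
# Hedged sketch of greedy routing on a uniform degree-2 chordal ring:
# take the chord (+s) while it cannot overshoot, else step along the ring.

def route(src, dst, n, s):
    """Return the hop sequence from src to dst on an n-node ring."""
    path, cur = [src], src
    remaining = (dst - src) % n
    while remaining:
        hop = s if remaining >= s else 1
        cur = (cur + hop) % n
        remaining -= hop
        path.append(cur)
    return path

print(route(2, 15, 16, 4))   # [2, 6, 10, 14, 15]: 3 chord hops + 1 ring hop
```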

9.
In this review, we compare methods for temporal sequence learning (TSL) across the disciplines of machine control, classical conditioning, neuronal models of TSL, and spike-timing-dependent plasticity (STDP). The review introduces the most influential models and focuses on two questions: To what degree are reward-based (e.g., TD) learning and correlation-based (Hebbian) learning related? And how do the different models correspond to the biological mechanisms of synaptic plasticity that may underlie them? We first compare the different models in an open-loop condition, where behavioral feedback does not alter the learning. Here we observe that reward-based and correlation-based learning are indeed very similar. Machine control is then used to introduce the problem of closed-loop control (e.g., actor-critic architectures). Here the problem of evaluative (reward) versus non-evaluative (correlation) feedback from the environment is discussed, showing that the two learning approaches are fundamentally different in the closed-loop condition. In trying to answer the second question, we compare neuronal versions of the different learning architectures to the anatomy of the involved brain structures (basal ganglia, thalamus, and cortex) and the molecular biophysics of glutamatergic and dopaminergic synapses. Finally, we discuss the different algorithms used to model STDP and compare them to reward-based learning rules. Certain similarities are found in spite of the strongly different timescales. Here we focus on the biophysics of the different calcium-release mechanisms known to be involved in STDP.
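The open-loop similarity between the two rule families can be made concrete: both a TD(0) value update (reward-based) and a Hebbian weight update (correlation-based) amount to a learning rate times an error or correlation term. The Python sketch below, with invented rates and signals, shows the structural parallel only; it does not reproduce the review's models.

```python
# Hedged sketch: TD(0) (reward-based) next to a Hebbian rule
# (correlation-based); rates and signals are invented.

alpha, gamma = 0.1, 0.9

def td0_update(V, s, r, s_next):
    """Reward-based: move V(s) toward the target r + gamma * V(s_next)."""
    V[s] += alpha * (r + gamma * V[s_next] - V[s])

def hebb_update(w, pre, post):
    """Correlation-based: strengthen w by the pre/post coincidence."""
    return w + alpha * pre * post

V = {"A": 0.0, "B": 0.0}
for _ in range(200):
    td0_update(V, "A", 1.0, "B")   # state A always yields reward 1
print(round(V["A"], 2))            # approaches 1.0, since V(B) stays 0

print(round(hebb_update(0.5, pre=1.0, post=0.8), 2))  # 0.5 -> 0.58
```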

10.
This paper makes two contributions. First, we introduce a model for evaluating the performance of data allocation and replication algorithms in distributed databases. The model is comprehensive in the sense that it accounts for I/O cost, for communication cost, and, because of reliability considerations, for limits on the minimum number of copies of each object. The model captures existing replica-management algorithms, such as read-one-write-all and quorum consensus. These algorithms are static in the sense that, in the absence of failures, the copies of each object are allocated to a fixed set of processors. In modern distributed databases, particularly in mobile computing environments, processors will dynamically store objects in their local database and will relinquish them. Therefore, as the second contribution of this paper, we introduce an algorithm for automatic dynamic allocation of replicas to processors. Then, using the new model, we compare the performance of the traditional read-one-write-all static allocation algorithm to that of the dynamic allocation algorithm. As a result, we obtain the relationship between communication cost and I/O cost for which static allocation is superior to dynamic allocation, and the relationships for which dynamic allocation is superior.
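In the spirit of the model described above, here is a deliberately simplified back-of-envelope comparison of static read-one-write-all against a single copy that migrates to its last user. All formulas and parameters are assumptions for illustration, not the paper's model, but they exhibit the same kind of crossover between communication cost and I/O cost.

```python
# Hedged cost sketch. c: communication cost of a remote access,
# io: local I/O cost, m: processors, p_read: fraction of reads,
# k: replica count. Uniform access is assumed throughout.

def rowa_cost(p_read, c, io, k, m):
    local = k / m                            # chance a copy is local
    read = local * io + (1 - local) * (c + io)
    write = k * io + (k - 1) * c             # update all k copies
    return p_read * read + (1 - p_read) * write

def migrating_copy_cost(p_read, c, io, m):
    hit = 1 / m                              # chance the single copy is local
    return hit * io + (1 - hit) * (c + io)   # same for reads and writes

for c in (0.5, 2.0, 8.0):
    print(c, rowa_cost(0.8, c, 1.0, 3, 5), migrating_copy_cost(0.8, c, 5))
```

With these invented numbers the migrating copy is cheaper while communication is cheap, and static read-one-write-all becomes cheaper once remote accesses grow expensive (the crossover here is at c = 5), mirroring the kind of communication-cost/I/O-cost threshold the paper derives.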

11.
User profiles help customize user access and adapt applications to users' needs. Automatically building user profiles is therefore an important research area. Nevertheless, standardizing these profiles in terms of representation and acquisition schemes, especially in large-scale systems such as Peer-to-Peer (P2P) systems, is a complex task. In this paper, we introduce a distributed user-profile modelling approach based on the user's history of search topics, without the need for any external knowledge resource (e.g., an ontology). The model learns from past interests to infer correlations between user requests, associated topics, relevant documents, and nodes (i.e., peers) in order to enhance any information retrieval process. The solution is based on an extension of Formal Concept Analysis (FCA) theory. We also study the integration of our model into the query routing (i.e., content discovery) and results aggregation processes of P2P systems. Experiments performed in a P2P simulator environment showed that our model outperforms its competitors in terms of effectiveness and efficiency.
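The FCA core that such an approach extends fits in a few lines: from a binary context of users and topics, the two derivation operators produce formal concepts, i.e., closed pairs of a user set and a topic set, which is the kind of structure a profile model can mine. The toy context below is invented.

```python
# Hedged sketch of Formal Concept Analysis derivation; the context
# (users x search topics) is invented for illustration.

context = {
    "alice": {"music", "p2p"},
    "bob":   {"music", "search"},
    "carol": {"music", "p2p", "search"},
}

def common_topics(users):
    """Derivation operator: topics shared by all given users."""
    sets = [context[u] for u in users]
    return set.intersection(*sets) if sets else set()

def users_with(topics):
    """Derivation operator: users interested in all given topics."""
    return {u for u, t in context.items() if topics <= t}

# Closing {"p2p"} yields a formal concept (extent, intent):
extent = users_with({"p2p"})
intent = common_topics(extent)
print(sorted(extent), sorted(intent))  # ['alice', 'carol'] ['music', 'p2p']
```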

12.
The purpose of this article is to make a contribution to more effective management of supply chains and networks, which we subsume under the title of supply systems. We conceive of supply systems management as the design, control, and development of logistics along the value chain or in value networks. In this article, we concentrate on control, i.e., the regulation and steering of supply systems from production to customer and vice versa, with the help of system dynamics modeling and simulation. Traditionally, supply chain management has been heavily supported by discrete-event simulation and optimization models at very detailed levels. Established tools, e.g., Manufacturing Resources Planning, Enterprise Resources Planning, and Production Planning Systems, have emphasized highly specialized functions, such as planning for optimal capacity use and achievement of delivery goals via the sequencing, scheduling, and dispatching of individual orders, or the global allocation of resources, but without the possibility of evaluating different outcomes of the allocation process. In the quest for efficiency and effectiveness, new approaches to control that lead beyond these functions of disposition are needed. We present a model that combines two systemic methodologies that operate on higher levels of aggregation and complement each other: system dynamics, to model and simulate the supply chain, and cybernetic control, to apply control-theoretical concepts, namely proportional, derivative, and integral control, in a combined mode as well as according to a recursive logic of distributed control. In this way, substantial improvements in both efficiency and effectiveness can be achieved, and ultimately organizational viability can be enhanced.
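A minimal simulation of the combined proportional/integral/derivative control idea on a one-stage supply system: the order rate is computed from the inventory error and reaches the stock after a delivery delay. Gains, delay, and demand below are invented for illustration; the article's system dynamics models are far richer.

```python
# Hedged sketch: P/I/D control of inventory with a 2-period delivery
# delay and constant demand; all parameters are invented.

target, stock, pipeline = 100.0, 60.0, [0.0, 0.0]
kp, ki, kd = 0.6, 0.05, 0.2
integral, prev_err = 0.0, target - stock

for t in range(25):
    demand = 10.0
    err = target - stock
    integral += err
    order = max(0.0, kp * err + ki * integral + kd * (err - prev_err))
    prev_err = err
    pipeline.append(order)              # order enters the pipeline...
    stock += pipeline.pop(0) - demand   # ...and arrives 2 periods later
    if t % 5 == 0:
        print(f"t={t:2d}  stock={stock:6.1f}  order={order:5.1f}")
```

Even this toy shows the characteristic overshoot and oscillation that delayed feedback induces in supply systems, which is precisely the behavior such control-theoretic tuning targets.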

13.
The application of the object-attribute (OA) architecture of a computing environment to the implementation of distributed automation systems with computational nodes (computers or PLCs) of different hardware architectures is described. The features of OA modeling of distributed automation tools and the main techniques for modeling, programming, and debugging such systems are shown.

14.
State-centric programming for sensor-actuator network systems
Distributed embedded systems such as wireless sensor and actuator networks require new programming models and software tools to support the rapid design and prototyping of sensing and control applications. Unlike centralized platforms and Web-based distributed systems, these distributed sensor-actuator network (DSAN) systems are characterized by a massive number of potentially failing nodes, limited energy and bandwidth resources, and the need to rapidly respond to sensor input. We describe a state-centric, agent-based design methodology to mediate between a system developer's mental model of physical phenomena and the distributed execution of DSAN applications. Building on the ideas of data-centric networking, sensor databases, and proximity-based group formation, we introduce the notion of collaboration groups, which abstracts common patterns in application-specific communication and resource allocation. Using a distributed tracking application with sensor networks, we'll demonstrate how state-centric programming can raise the abstraction level for application developers.

15.
The deployment of network applications (e.g., multimedia and other real-time applications) has put extra pressure on scarce network resources (bandwidth and buffers), generating a need for effective resource allocation and management. In this paper we adopt a stochastic fluid model (SFM) framework and derive sensitivity estimators for three performance measures of interest (workload, throughput, and loss volume) with respect to the parameters of the fluid inflow process. The motivation is to use the sensitivity estimators for dynamic control and optimisation of the system's performance. These estimators are evaluated from data observed along a single sample path of the discrete-event system and are used to dynamically control the input process, allowing the network to operate continuously at an optimal or near-optimal point. The proposed analysis naturally leads to a distributed algorithm for evaluating the propagation of perturbations in a network (e.g., due to changes in the buffer size of upstream nodes).

16.
Context: Service-oriented architecture has become a widely used concept in the software industry. However, we currently lack support for designing variability-intensive service-oriented systems. Such systems could be used in different environments, without the need to design them from scratch. To support the design of variability-intensive service-oriented systems, reference architectures that facilitate variability in instantiated service-oriented architectures can help.
Objective: The design of variability-intensive service-oriented reference architectures is subject to specific constraints. Architects need to know these constraints when designing such reference architectures. Our objective is to identify these constraints.
Method: An exploratory case study was performed in the context of local e-government in the Netherlands to study constraints from the perspective of (a) the users of a variability-intensive service-oriented system (municipalities that implement national laws), and (b) the implementing organizations (software vendors). We collected data through interviews with representatives from five organizations, document analyses, and expert meetings.
Results: We identified ten constraints (e.g., organizational constraints, integration-related constraints) which affect the process of designing reference architectures for variability-intensive service-oriented systems. We also identified how stakeholders are affected by these constraints, and how the constraints are specific to the case study domain.
Conclusions: Our results help design variability-intensive service-oriented reference architectures. Furthermore, our results can be used to define processes to design such reference architectures.

17.
Distributed simulation systems are increasingly used in defense technology, production process control, and enterprise supply-chain management. This article focuses on comparing two common architectures that support distributed computing and simulation: CORBA and RMI. Although their basic structures are similar, there are still some differences between them, and these differences can profoundly influence how we design and develop distributed computing and simulation applications.

18.
The core logic of web applications that offer some particular service, such as online shopping or e-commerce, is typically captured by Business Processes (BPs). Among all the (possibly infinitely many) execution flows of a BP, analysts are often interested in identifying the flows that are "most important" according to some weight metric. The goal of the present paper is to provide efficient algorithms for top-k query evaluation over the possible executions of Business Processes under a given weight function. Unique difficulties of top-k analysis in this setting stem from (1) the fact that the number of possible execution flows of a given BP is typically very large, or even infinite in the presence of recursion, and (2) the fact that the weights (e.g., likelihood, monetary cost, etc.) induced by actions performed during the execution (e.g., product purchases) may be inter-dependent (due to probabilistic dependencies, combined discount deals, etc.). We exemplify these difficulties and overcome them to provide efficient algorithms for query evaluation where possible. We also describe in detail an application prototype, based on our model and algorithms, that we have developed for recommending optimal navigation in an online shopping web site.
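For the easy end of the problem, independent additive weights and no recursion, top-k flows can be enumerated with a best-first search over the BP graph, as in the Python sketch below; the paper's actual contribution lies in the hard cases (inter-dependent weights, recursion), which this sketch deliberately avoids. The shopping-flavored graph is invented.

```python
# Hedged sketch of top-k execution-flow retrieval under independent
# additive weights: best-first enumeration of cheapest start-to-end paths.

import heapq

def top_k_flows(graph, start, end, k):
    """graph[u] = [(v, weight), ...]; return up to k cheapest paths."""
    heap, found = [(0.0, [start])], []
    while heap and len(found) < k:
        cost, path = heapq.heappop(heap)
        node = path[-1]
        if node == end:
            found.append((cost, path))
            continue
        for nxt, w in graph.get(node, []):
            if nxt not in path:            # toy loop guard, no recursion
                heapq.heappush(heap, (cost + w, path + [nxt]))
    return found

bp = {"browse":   [("cart", 1.0), ("search", 0.5)],
      "search":   [("cart", 1.0)],
      "cart":     [("pay", 2.0), ("discount", 0.5)],
      "discount": [("pay", 1.0)]}
for cost, flow in top_k_flows(bp, "browse", "pay", 3):
    print(cost, " -> ".join(flow))   # cheapest flow first (cost 2.5)
```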

19.
The growing complexity of today's computing systems requires a large amount of administration, which poses a serious challenge for manual administration. Therefore, new ways have to be found for systems to manage themselves autonomously. Such systems should be characterized by so-called self-x properties such as self-configuration, self-optimization, self-healing, and self-protection. The autonomous assignment of services to nodes in a distributed way is a crucial part of developing self-configuring systems. In this paper, we introduce a self-configuration algorithm for Organic Computing systems which aims, on the one hand, to distribute the load of services equally across nodes, as in a typical load-balancing scenario, and, on the other hand, to assign services of different importance levels to nodes so that the more important services are assigned to more trustworthy nodes. Furthermore, the proposed algorithm includes a fault-handling mechanism enabling the system to continue hosting services even in the presence of faults. The evaluation indicates that the proposed approach is suitable for large-scale distributed systems.
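A centralized greedy stand-in for the assignment objective described above: place the most important services on the most trustworthy nodes while keeping node loads balanced. The scoring rule and all numbers are assumptions for illustration; the paper's algorithm is distributed and fault-tolerant.

```python
# Hedged sketch of trust- and load-aware service assignment; the score
# trades importance-weighted trust against current load.

services = [("billing", 0.9, 2.0), ("logging", 0.2, 1.0),
            ("auth", 0.8, 1.5), ("stats", 0.4, 1.0)]  # (name, importance, load)
nodes = {"n1": 0.95, "n2": 0.6, "n3": 0.3}            # node -> trust level

load = {n: 0.0 for n in nodes}
placement = {}
for name, importance, cost in sorted(services, key=lambda s: -s[1]):
    # important services favor trusted nodes; load pushes toward balance
    best = max(nodes, key=lambda n: importance * nodes[n] - load[n])
    placement[name] = best
    load[best] += cost

print(placement)  # the most important service lands on the most trusted node
print(load)
```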

20.

In the state-of-the-art parallel programming approaches OpenCL and CUDA, so-called host code is required for a program's execution. Efficiently implementing host code is often a cumbersome task, especially when executing OpenCL and CUDA programs on systems with multiple nodes, each comprising different devices, e.g., multi-core CPUs and graphics processing units: the programmer is responsible for explicitly managing node and device memory, for synchronizing computations with data transfers between devices of potentially different nodes, and for optimizing data transfers between device memories and nodes' main memories, e.g., by using pinned main memory to accelerate data transfers and by overlapping the transfers with computations. We develop the distributed OpenCL/CUDA abstraction layer (dOCAL), a novel high-level C++ library that simplifies the development of host code. dOCAL combines major advantages over the state-of-the-art high-level approaches: (1) it simplifies implementing both OpenCL and CUDA host code by providing a simple-to-use, high-level abstraction API; (2) it supports executing arbitrary OpenCL and CUDA programs; (3) it allows conveniently targeting the devices of different nodes by automatically managing node-to-node communications; (4) it simplifies implementing data transfer optimizations by providing different, specially allocated memory regions, e.g., pinned main memory for overlapping data transfers with computations; (5) it optimizes memory management by automatically avoiding unnecessary data transfers; (6) it enables interoperability between OpenCL and CUDA host code for systems with devices from different vendors. Our experiments show that dOCAL significantly simplifies the development of host code for heterogeneous and distributed systems, with a low runtime overhead.

