20 similar documents found (search time: 0 ms)
1.
2.
Recently, event processing (EP) has gained considerable attention as an individual discipline in computer science. From a software engineering perspective, EP systems still lack the maturity of well-established software architectures. For the development of industrial EP systems, generally accepted software architectures based on proven design patterns and principles are still missing. In this article, we introduce a catalog of design patterns that supports the development of event-driven architectures (EDAs) and complex EP systems. The design principles originate from experiences reported in publications as well as from our own experiences in building EP systems with industrial and academic partners. We present several patterns on different layers of abstraction that define the overall structure as well as the building blocks of EP systems. Architectural patterns that determine the top-level structure of an EDA can be distinguished from design patterns that specify the basic mechanisms of EP. The practical application of the catalog of patterns is described by the pattern-based design of a sample EDA for a sensor-based energy control system. Finally, we propose a coherent and general reference architecture for EP derived from the proposed patterns. Copyright © 2013 John Wiley & Sons, Ltd.
3.
Jian-Qiang Hu, Discrete Event Dynamic Systems, 1995, 5(2-3): 167-186
In this paper we use the event synchronization scheme to develop a new method for the parallel simulation of many discrete event dynamic systems simultaneously. Although a few parallel simulation methods have been developed in recent years, such as the well-known Standard Clock method, most of them are largely limited to Markovian systems. The main advantage of our method is its applicability to non-Markovian systems. For Markovian systems, a comparative efficiency study of our method and the Standard Clock method is carried out on the Connection Machine CM-5, a parallel machine with both SIMD (Single Instruction, Multiple Data) and MIMD (Multiple Instruction, Multiple Data) architectures. The simulation results show that when the event rates of Markovian systems do not differ by much, the two methods are comparable, with the Standard Clock method performing better in most cases. For Markovian systems with very different event rates, our method often yields better results. Most importantly, our simulation results also show that our method works as efficiently for non-Markovian systems as for Markovian systems.
4.
This paper considers a distributed interference avoidance problem employing frequency assignment in the Gaussian interference channel (IC). We divide the common channel into several sub-channels, and each user chooses one sub-channel for transmission in such a way that the total interference in the IC is minimized. This mechanism, named interference avoidance in this paper, can be modeled as a competitive game, and a completely autonomous distributed iterative algorithm, the distributed interference avoidance algorithm (DIA), is adopted to reach the Nash equilibrium (NE) of the game. Because each user optimizes only its own objective, the DIA is a sub-optimal algorithm. Therefore, by introducing an optimal compensation (or price) into the competitive game model, we develop a compensation-based game model that approximates the optimal interference avoidance problem. Moreover, an optimal algorithm, the iterative optimal interference avoidance algorithm (IOIA), is proposed to reach the optimum of the interference avoidance scheme. We analyze the implementation complexity of the proposed algorithm, which is only O(N), with N standing for the number of users in the IC, and we prove its convergence. Upper and lower performance bounds are derived for the IOIA algorithm. The simulation results show that the IOIA does reach the optimum under the interference avoidance mechanism.
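The kind of best-response dynamics behind a distributed algorithm like DIA can be sketched as follows. The gain matrix, user count, and stopping rule here are illustrative assumptions, not the paper's actual model: each user in turn moves to the sub-channel on which it sees the least interference, until no user wants to move.

```python
import random

def total_interference(assign, gain):
    # Sum of cross-interference between users sharing a sub-channel.
    n = len(assign)
    return sum(gain[i][j]
               for i in range(n) for j in range(n)
               if i != j and assign[i] == assign[j])

def best_response_iteration(n_users, n_channels, gain, max_rounds=100):
    """Each user in turn switches to the sub-channel that minimizes the
    interference it sees; stop when no user wants to move (a Nash
    equilibrium of this congestion-style game)."""
    assign = [random.randrange(n_channels) for _ in range(n_users)]
    for _ in range(max_rounds):
        changed = False
        for i in range(n_users):
            # Interference user i would experience on each candidate channel.
            def seen(ch):
                return sum(gain[j][i] for j in range(n_users)
                           if j != i and assign[j] == ch)
            best = min(range(n_channels), key=seen)
            if seen(best) < seen(assign[i]):
                assign[i] = best
                changed = True
        if not changed:
            break
    return assign
```

With two users, two sub-channels, and symmetric gains, the dynamics separate the users onto different sub-channels, driving the total interference to zero.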
5.
6.
Research on Synchronization Mechanisms for Parallel Discrete Event Simulation (total citations: 2; self-citations: 0; by others: 2)
Logic simulation plays an important role in the design of new systems: computer-based simulation provides immediate feedback on outputs and exposes potential problems early, thereby shortening the design cycle and reducing development cost. Parallel discrete event simulation reduces simulation time by distributing the computation across the nodes of a parallel machine or a network, and is regarded as an effective way to address simulation speed. Among the factors affecting simulation performance, synchronization between the parallel subsystems is one of the key factors that directly determines parallel performance. This paper discusses the synchronization mechanisms of parallel discrete event simulation, introduces their basic principles, characteristics, and open problems, and describes possible improvements.
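As a concrete illustration of one such synchronization mechanism, the conservative (Chandy-Misra style) rule can be sketched as below: a logical process may only consume events strictly earlier than every neighbor's clock plus its lookahead. The class and its interface are a hypothetical sketch, not taken from any surveyed system.

```python
import heapq

class LogicalProcess:
    """Minimal logical process for conservative parallel discrete
    event simulation (illustrative sketch)."""

    def __init__(self, name, lookahead):
        self.name = name
        self.lookahead = lookahead
        self.clock = 0.0
        self.queue = []          # min-heap of (timestamp, event)

    def schedule(self, ts, event):
        heapq.heappush(self.queue, (ts, event))

    def safe_time(self, neighbor_clocks):
        # An LP may only process events earlier than every neighbor's
        # clock plus its lookahead; this preserves causality.
        return min(c + self.lookahead for c in neighbor_clocks)

    def advance(self, neighbor_clocks):
        bound = self.safe_time(neighbor_clocks)
        processed = []
        while self.queue and self.queue[0][0] < bound:
            ts, ev = heapq.heappop(self.queue)
            self.clock = ts
            processed.append(ev)
        return processed
```

A larger lookahead lets each process run further ahead before blocking, which is exactly the lever the conservative approach offers for improving parallel performance.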
7.
Masked prioritized synchronization for interaction and control of discrete event systems (total citations: 1; self-citations: 0; by others: 1)
This paper extends the formalism of prioritized synchronous composition (PSC), proposed by Heymann for modeling the interaction (and control) of discrete event systems, to permit systems to interact with their environment via interface masks. This leads to the notion of masked prioritized synchronous composition (MPSC), which we formally define. MPSC can be used to model the interaction of systems at single as well as multiple interfaces. We show that MPSC can alternatively be computed by unmasking the PSC of the masked systems, thereby establishing a link between MPSC and PSC. We next prove that MPSC is associative and thus suitable for the modeling and analysis of supervisory control of discrete event systems. Finally, we use the MPSC of a discrete event plant and a supervisor to control the plant behavior and show (constructively) that, in the absence of driven events, controllability together with normality of the given specification serve as conditions for the existence of a supervisor. This extends earlier results on supervisory control, which permit control and observation masks to be associated with the plant only.
8.
This article deals with decentralized diagnosis, where a set of diagnosers cooperate for detecting faults in a discrete event system. We propose a new framework, called multi-decision diagnosis, whose basic principle consists in using several decentralized diagnosis architectures working in parallel. We first present a generic form of multi-decision diagnosis, where several decentralized diagnosis architectures work in parallel and combine their global decisions disjunctively or conjunctively. We then study in more detail the inference-based multi-decision diagnosis, that is, the case where each of the decentralized architectures in parallel is based on the inference-based framework. We develop a method that checks if a given specification is diagnosable under the inference-based multi-decision architecture. We also show that with our method, the worst-case computational complexity for checking codiagnosability for our inference-based multi-decision architecture is in the same order of complexity as checking codiagnosability for the inference-based architecture designed by Kumar and Takai. In fact, multi-decision diagnosis is fundamentally undecidable and we have formulated a decidable variant of it. Multi-decision diagnosis is formally based on language decomposition, but it is worth noting that our objective is not to answer the existential question of language decomposition in the general case. Our objective is rather to propose a decentralized diagnosis architecture that generalizes the decidable existing ones.
9.
10.
The game world graph (GWG) framework is a taxonomy for analyzing and classifying computer game architectures. This article presents a systematic review of game architectures using the GWG framework. The review validates the usefulness of the GWG framework through classifying game architectures described in the literature into distinct categories according to the framework. The major contribution of the paper is a state-of-the-art presentation of 40 different game architectures, which covers architectures for all kinds of games from single player games to massively multiplayer online games (MMOGs). Previous reviews of game architectures have focused on a narrower selection of games such as only networked games, MMOGs or similar. Further, none of the previous reviews has used a systematic framework for analyzing the characteristics of game architectures. Using the framework, we can identify similarities and differences of the 40 game architectures in a systematic way. Finally, the paper outlines the evolution of the game architectures from the perspective of the GWG framework.
11.
12.
This paper studies the identification of vulnerable points in the software data flows of a satellite LTE uplink synchronization system under single-event effects. Because existing error propagation analysis methods show large deviations when identifying vulnerable points in software that processes large volumes of data, and in light of the high-volume data processing requirements of onboard LTE uplink synchronization, complex network theory is introduced and a software data-flow vulnerability identification method based on network node degree is proposed. Building on error propagation analysis, the method defines a single-event-upset error permeation rate and uses a matrix description to construct an error propagation network model, transforming the data-flow vulnerability mining problem into one of mining key network nodes; all local extrema are searched under a given false-alarm probability, so that every data-flow vulnerable point at that false-alarm probability is found. Simulation results show that the method effectively identifies the vulnerable points in the high-volume data processing of onboard LTE uplink synchronization.
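A minimal sketch of the node-degree idea: treat the permeation rates as edge weights of a directed error propagation network and rank nodes by total weighted degree, flagging high-degree nodes as vulnerable. The matrix values and threshold below are made-up illustrations, not data from the paper.

```python
def weighted_degrees(perm):
    """perm[i][j]: rate at which a single-event-upset error in node i
    permeates to node j (hypothetical values). Returns, per node, the
    weighted out-degree plus weighted in-degree."""
    n = len(perm)
    out_deg = [sum(perm[i]) for i in range(n)]
    in_deg = [sum(perm[i][j] for i in range(n)) for j in range(n)]
    return [out_deg[k] + in_deg[k] for k in range(n)]

def vulnerable_nodes(perm, threshold):
    # Nodes whose total weighted degree reaches the threshold are
    # flagged as data-flow vulnerable points.
    deg = weighted_degrees(perm)
    return [i for i, d in enumerate(deg) if d >= threshold]
```

For instance, a node that both emits and absorbs high-permeation edges dominates the ranking, which matches the intuition that errors both originate from and concentrate at such nodes.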
13.
In a video-on-demand (VOD) environment, disk arrays are often used to support the disk bandwidth requirement. This can pose serious problems on available disk bandwidth upon disk failure. In this paper, we explore the approach of replicating frequently accessed movies to provide the high data bandwidth and fault tolerance required in a disk-array-based video server. An isochronous continuous video stream imposes different requirements from a random access pattern on databases or files. Explicitly, we propose a new replica placement method, called rotational mirrored declustering (RMD), to support high data availability for disk arrays in a VOD environment. In essence, RMD is similar to conventional mirrored declustering in that replicas are stored in different disk arrays, but it differs in that the replica placements in the different disk arrays under RMD are properly rotated. Combining the merits of the prior chained and mirrored declustering methods, RMD is particularly suitable for storing multiple movie copies to support VOD applications. To assess the performance of RMD, we conduct a series of experiments by emulating the storage and delivery of movies in a VOD system. Our results show that RMD consistently outperforms the conventional methods in load-balancing and fault-tolerance capability after disk failure, and is deemed a viable approach to supporting replica placement in a disk-array-based video server.
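The rotation idea can be sketched roughly as follows. The abstract does not give RMD's exact placement function, so the modular offsets below are an illustrative assumption, not the paper's scheme:

```python
def rmd_layout(n_arrays, n_disks, n_blocks):
    """Rotational-mirroring sketch: the primary copy of block b goes
    to array b % n_arrays; the replica goes to the next array, with
    its disk position rotated by one so that a failed disk's load is
    not concentrated on a single mirror disk. Hypothetical layout."""
    placement = {}
    for b in range(n_blocks):
        a = b % n_arrays
        d = (b // n_arrays) % n_disks
        primary = (a, d)                                  # (array, disk)
        replica = ((a + 1) % n_arrays, (d + 1) % n_disks)
        placement[b] = (primary, replica)
    return placement
```

The invariant that matters for availability is that the two copies of every block always live in different arrays, so a whole-array outage never loses both copies.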
14.
This paper describes low cost dynamic architectures for microcomputers. The proposed architectures allow the CPU to be simultaneously active during DMA block I/O operations. As a result, a considerable speed advantage may be obtained at very little extra cost. An experimental system has been built and successfully tested. Several computational environments have been simulated on the prototype system. Some alternative designs with varying degrees of flexibility and hardware cost are discussed. The designs are aimed primarily at total overlap of I/O interfaces and processor activities. Emulation data show that in many application environments the proposed architectures could provide substantially higher throughputs than conventional static architectures.
15.
The growth of machine-generated relational databases, both in the sciences and in industry, is rapidly outpacing our ability to extract useful information from them by manual means. This has brought into focus machine learning techniques like Inductive Logic Programming (ILP) that are able to extract human-comprehensible models from complex relational data. The price to pay is that ILP techniques are not efficient: they can be seen as performing a form of discrete optimisation, which is known to be computationally hard, and the complexity is usually some super-linear function of the number of examples. While little can be done to alter the theoretical bounds on the worst-case complexity of ILP systems, some practical gains may follow from the use of multiple processors. In this paper we survey the state of the art in parallel ILP. We implement several parallel algorithms and study their performance using some standard benchmarks. The principal findings of interest are these: (1) of the techniques investigated, the one that simply constructs models in parallel on each processor using a subset of the data and then combines the models into a single one yields the best results; and (2) sequential (approximate) ILP algorithms based on randomized searches have lower execution times than (exact) parallel algorithms, without sacrificing the quality of the solutions found.
This is an extended version of the paper entitled Strategies to Parallelize ILP Systems, published in the Proceedings of the 15th International Conference on Inductive Logic Programming (ILP 2005), vol. 3625 of LNAI, pp. 136-153, Springer-Verlag.
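The winning strategy above (learn on data subsets in parallel, then combine the models) can be sketched with a toy learner. The "learner" here is a deliberately trivial stand-in for an ILP clause search, not an actual ILP system:

```python
from concurrent.futures import ThreadPoolExecutor

def learn_rules(examples):
    """Toy 'learner': keep every feature that is true in all positive
    examples of the subset (a stand-in for an ILP clause search)."""
    pos = [feats for feats, label in examples if label]
    if not pos:
        return set()
    common = set(pos[0])
    for feats in pos[1:]:
        common &= set(feats)
    return common

def parallel_learn(examples, n_workers=4):
    # Split the data into contiguous chunks, learn a model per chunk
    # in parallel, then combine the models by union -- the strategy
    # the survey found most effective.
    size = (len(examples) + n_workers - 1) // n_workers
    chunks = [examples[i:i + size] for i in range(0, len(examples), size)]
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        models = list(pool.map(learn_rules, chunks))
    combined = set()
    for m in models:
        combined |= m
    return combined
```

The appeal of this scheme is that the expensive search touches only a fraction of the examples on each processor, and combining the per-chunk models is cheap compared with the search itself.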
16.
Video compositing, the editing and integrating of many video sequences into a single presentation, is an integral part of advanced multimedia services. Single-user compositing systems have been suggested in the past, but when they are extended to accommodate many users, the amount of memory required quickly grows out of hand. We propose two new architectures for digital video compositing in a multiuser environment that are memory-efficient and can operate in real time. Both architectures decouple the task of memory management from compositing processing. We show that under hard throughput and bandwidth constraints, a memoryless solution for transferring data from many video sources to many users does not exist. We overcome this using (i) a dynamic memory buffering architecture and (ii) a constant-memory-bandwidth solution that transforms the sources-to-users transfer schedule into two schedules and then pipelines the computation. The architectures support opaque overlapping of images, arbitrarily shaped images, and images whose shapes change dynamically from frame to frame.
17.
We study the synthesis problem for external linear or branching specifications and distributed, synchronous architectures with arbitrary delays on processes. External means that the specification only relates input and output variables. We introduce the subclass of uniformly well-connected (UWC) architectures for which there exists a routing allowing each output process to get the values of all inputs it is connected to, as soon as possible. We prove that the distributed synthesis problem is decidable on UWC architectures if and only if the output variables are totally ordered by their knowledge of input variables. We also show that if we extend this class by letting the routing depend on the output process, then the previous decidability result fails. Finally, we provide a natural restriction on specifications under which the whole class of UWC architectures is decidable.
18.
The lack of high-level languages and good compilers for parallel machines hinders their widespread acceptance and use. Programmers must address issues such as process decomposition, synchronization, and load balancing. We have developed a parallelizing compiler that, given a sequential program and a memory layout of its data, performs process decomposition while balancing parallelism against locality of reference. A process decomposition is obtained by specializing the program for each processor to the data that resides on that processor. If this analysis fails, the compiler falls back to a simple but inefficient scheme called run-time resolution, in which each process's role in the computation is determined by examining the data required for execution at run-time. Thus, our approach to process decomposition is data-driven rather than program-driven. We discuss several message optimizations that address the issues of overhead and synchronization in message transmission. Accumulation reorganizes the computation of a commutative and associative operator to reduce message traffic. Pipelining sends a value as close to its computation as possible to increase parallelism. Vectorization of messages combines messages with the same source and the same destination to reduce overhead. Our results from experiments in parallelizing SIMPLE, a large hydrodynamics benchmark, for the Intel iPSC/2 show a speedup within 60% to 70% of handwritten code.
19.
A taxonomy is presented that extends M. J. Flynn's taxonomy (IEEE Trans. Comput., vol. C-21, no. 9, pp. 948-60, Sept. 1972), especially in the multiprocessor category. It is a two-level hierarchy in which the upper level classifies architectures based on the number of processors for data and for instructions and the interconnections between them. The lower level can be used to distinguish variants even more precisely; it is based on a state-machine view of processors. The author suggests why taxonomies are useful in studying architecture and shows how this applies to a number of modern architectures.
20.
Chiahon Chien, Performance Evaluation, 1993, 18(3): 175-188
Cumulative distribution functions and probability density functions for the minimum and maximum seek distances in disk systems with two independent arms are derived. A comparison is made with conventional one-arm systems. The different effects of the extra arm on the average seek distance and the average seek time are discussed. In particular, a realistic example showing the marked difference between the impacts on the average seek distance and the average seek time is given. The practical use of these probability density functions in conjunction with actual measurement data is demonstrated. Efficient formulas for computing the average seek time from measurement data are given. Experimental results validating the proposed cumulative distribution functions are reported.
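The benefit of the second arm on the average seek distance can be checked with a quick Monte Carlo sketch. Uniform request and arm positions on [0, 1] are a simplifying assumption here (real disks have discrete cylinders and non-uniform access patterns); under it, the one-arm average seek distance is the classical 1/3.

```python
import random

def avg_seek(n_arms, trials=100_000, seed=1):
    """Monte Carlo estimate of the average seek distance when the
    request and each arm position are independently uniform on [0, 1]
    and the closest arm serves the request."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        t = rng.random()                                   # request position
        total += min(abs(rng.random() - t) for _ in range(n_arms))
    return total / trials
```

Running this shows the two-arm average seek distance falling well below the one-arm value, which is the qualitative effect the derived distributions quantify exactly.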