Similar Documents
20 similar documents retrieved (search time: 31 ms)
1.
The high-level architecture (HLA) standard developed by the Department of Defense in the United States is a key technology for distributed simulation. Inside the HLA framework, many different simulators (termed federates) may be interconnected to create a single more complex simulator (federation). Data distribution management (DDM) is an optional subset of services that controls which federates should receive notification of state modifications made by other federates. A simple DDM implementation will usually generate much more traffic than needed, whereas a complex one might introduce too much overhead. In this work, we describe an approach to DDM that delegates a portion of the DDM computation to a processor on the network card in order to provide more CPU time for other federate and Runtime Infrastructure (RTI) computations while still being able to exploit the benefits of a complex DDM implementation to reduce the amount of information exchange.

2.
3.
Dynamic decision-making (DDM) research grew out of a perceived need for understanding how people control dynamic, complex, real-world systems. DDM has describable characteristics and, with some unavoidable sacrifice of realism, is suitable for study in a laboratory setting through the use of complex computer simulations commonly called 'microworlds'. This paper presents a taxonomic definition of DDM, an updated review of existing microworlds and their characteristics, and a set of cognitive demands imposed by DDM tasks. Although the study of DDM has garnered little attention to date, we believe that both technological advancement and the relationships between DDM and naturalistic decision making, complex problem solving, and general systems theory have made DDM a viable process by which to study how people make decisions in dynamic, real-world environments.

4.
A hierarchical multicast address allocation strategy in the data distribution management mechanism (total citations: 12; self-citations: 1; others: 12)
The high level architecture (HLA) supports interoperability and reuse among simulation applications. Owing to resource limitations, Internet-based distributed simulation faces a system scalability challenge. HLA provides a data distribution management mechanism that makes improved scalability possible. This paper analyzes how data distribution management is implemented in HLA and, to address the remaining problems, proposes a hierarchical multicast address allocation strategy. The strategy resolves the conflict between the limited number of multicast addresses and redundant data reception at simulation nodes, while also providing strong support for reliable transmission and packing of simulation data.
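As an illustration only, the following minimal Python sketch shows one way a hierarchical allocation of a limited pool of multicast addresses to routing-space grid cells could look. The quadtree-style coarsening, class, and parameter names are assumptions made for illustration, not the strategy defined in the paper.

# Hedged sketch (assumed, not the paper's algorithm): map grid cells onto a
# bounded pool of multicast groups by coarsening the grid until it fits.
class HierarchicalGroupAllocator:
    def __init__(self, grid_size, max_groups):
        self.grid_size = grid_size            # leaf cells per axis (power of two)
        level_size = grid_size
        # Halve each axis until the cell count fits the multicast-address budget.
        while level_size * level_size > max_groups and level_size > 1:
            level_size //= 2
        self.level_size = level_size

    def group_for_cell(self, cx, cy):
        # Sibling leaf cells merged at the chosen level share one multicast
        # group, trading some redundant reception for a bounded address count.
        scale = self.grid_size // self.level_size
        return (cy // scale) * self.level_size + (cx // scale)

alloc = HierarchicalGroupAllocator(grid_size=16, max_groups=64)
print(alloc.group_for_cell(5, 9))   # 256 leaf cells share at most 64 groups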

5.
Data distribution management (DDM) is one of the services defined by the DoD High Level Architecture and is necessary to provide efficient, scalable mechanisms for distributing state updates and interaction information in large scale distributed simulations. In this paper, we focus on data distribution management mechanisms (also known as filtering) used for real time training simulations. We propose a new method of DDM, which we refer to as the dynamic grid-based approach. Our scheme is based on a combination of a fixed grid-based method, known for its scalability, and a region-based strategy, which provides greater accuracy than the fixed grid-based method. We describe our DDM algorithm, its implementation, and report on the performance results that we have obtained using the RTI-Kit framework. Our results clearly indicate that our scheme is scalable and that it reduces the message overhead by 40%, and the number of multicast groups used by 98%, when compared to the fixed grid-based allocation scheme using 10 nodes, 1000 objects, and 20,000 grid cells.
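As a rough illustration of the grid-plus-exact-filter idea (my own simplification, not the paper's RTI-Kit implementation), the Python sketch below uses a fixed grid for cheap candidate lookup and an exact region-overlap test to discard false matches within a cell; the cell size, class, and names are assumptions.

from collections import defaultdict

CELL = 100.0   # grid-cell edge length in routing-space units (assumed)

def cells_of(region):
    """Yield grid-cell coordinates overlapped by an axis-aligned region."""
    (x0, y0), (x1, y1) = region
    for cx in range(int(x0 // CELL), int(x1 // CELL) + 1):
        for cy in range(int(y0 // CELL), int(y1 // CELL) + 1):
            yield (cx, cy)

def overlaps(a, b):
    """Exact axis-aligned overlap test between two regions."""
    (ax0, ay0), (ax1, ay1) = a
    (bx0, by0), (bx1, by1) = b
    return ax0 <= bx1 and bx0 <= ax1 and ay0 <= by1 and by0 <= ay1

class GridFilteredDDM:
    def __init__(self):
        self.subscriptions = defaultdict(list)   # cell -> [(federate, region)]

    def subscribe(self, federate, region):
        for cell in cells_of(region):
            self.subscriptions[cell].append((federate, region))

    def match(self, update_region):
        """Federates that should receive an update for this region."""
        receivers = set()
        for cell in cells_of(update_region):
            for federate, sub_region in self.subscriptions.get(cell, []):
                if overlaps(update_region, sub_region):   # exact filter
                    receivers.add(federate)
        return receivers

ddm = GridFilteredDDM()
ddm.subscribe("federate-A", ((0, 0), (150, 150)))
print(ddm.match(((120, 120), (130, 130))))   # {'federate-A'}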

6.
The high level architecture (HLA) is a standard for federations of distributed simulations that exchange run-time data. HLA's data distribution management (DDM) services reduce data delivered to simulations based on their declarations of data produced and required. The HLA specifications, including DDM, were changed substantially from the Department of Defense 1.3 standard to the IEEE 1516 standard. The two DDM specifications' (DDM 1.3 and DDM 1516) power to define intersimulation data flows is compared. A transformation from DDM 1.3 to DDM 1516 configurations and a mapping from DDM 1516 to DDM 1.3 configurations prove that the DDM specifications are equivalently powerful.

7.
Data distribution management (DDM) is one of the most critical components of any large-scale interactive distributed simulation system. The aim of DDM is to reduce and control the volume of information exchanged among the simulated entities (federates) in a large-scale distributed simulation. In order to fulfill this goal, a considerable number of DDM messages must be exchanged within the simulation (federation). Whether each message should be sent immediately after it is generated or held until it can be grouped with other DDM messages needs further investigation. Our experimental results show that the total DDM time of a simulation varies considerably depending on which transmission strategy is used; moreover, in the case of grouping, the DDM time depends on the size of the group. In this paper, we propose a novel DDM approach, which we refer to as Adaptive Grid-Based (AGB) DDM. The AGB protocol is distinct from all existing DDM implementations because it is able to predict the average amount of data generated in each time step of a simulation. The AGB DDM approach therefore keeps a simulation running in the most appropriate mode to achieve a desired performance. This new DDM approach consists of two adaptive control parts: 1) the Adaptive Resource Allocation Control (ARAC) scheme and 2) the Adaptive Transmission Control (ATC) scheme. The focus of this paper is on the ATC scheme. We describe how to build a switching model to predict the average amount of DDM messages generated and how the ATC scheme uses this estimate to optimize the overall DDM time. Our experimental results, based on an extensive set of case studies, provide clear evidence that the ATC scheme achieves the best DDM-time performance among all existing DDM protocols.
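The immediate-versus-grouped trade-off can be pictured with the minimal Python sketch below; it is not the paper's ATC switching model, only an assumed illustration in which an exponential moving average of messages per time step selects the transmission mode for the next step.

class AdaptiveSender:
    def __init__(self, transport, group_threshold=8, alpha=0.2):
        self.transport = transport            # callable taking a list of messages
        self.group_threshold = group_threshold
        self.alpha = alpha                    # smoothing factor for the estimate
        self.avg_per_step = 0.0               # estimated messages per time step
        self.buffer = []
        self.step_count = 0

    def send(self, message):
        self.step_count += 1
        if self.avg_per_step < self.group_threshold:
            self.transport([message])         # light traffic: send immediately
        else:
            self.buffer.append(message)       # heavy traffic: group messages

    def end_of_step(self):
        if self.buffer:
            self.transport(self.buffer)       # flush the grouped messages
            self.buffer = []
        # Update the per-step estimate that picks the next step's mode.
        self.avg_per_step = ((1 - self.alpha) * self.avg_per_step
                             + self.alpha * self.step_count)
        self.step_count = 0

sender = AdaptiveSender(transport=lambda msgs: print(f"sent {len(msgs)} message(s)"))
for i in range(20):
    sender.send(f"region-update-{i}")
sender.end_of_step()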

8.
9.

Whereas topology optimization has achieved immense success, it involves an intrinsic difficulty: optimized structures obtained by topology optimization strongly depend on the settings of the objective and constraint functions, i.e., the formulation. Nevertheless, the appropriate formulation is usually not obvious when considering structural design problems. Although trial-and-error to determine appropriate formulations is implicitly performed in several studies on topology optimization, it is important to support this trial-and-error process explicitly. Therefore, in this study, we propose a new framework for topology optimization to determine appropriate formulations. The basic idea of this framework is to incorporate knowledge discovery in databases (KDD) into topology optimization. Thus, we construct a database by collecting the various and numerous material distributions obtained by solving various structural design problems with topology optimization, and we find useful knowledge about appropriate formulations from the database on the basis of KDD. One issue must be resolved to realize this idea: the material distribution in the design domain of a data record must be converted to conform to the design domain of the target design problem for which an appropriate formulation should be determined. For this purpose, we also propose a material-distribution-converting method termed design domain mapping (DDM). Several numerical examples demonstrate that the proposed framework, including DDM, successfully and explicitly supports the trial-and-error process of determining the appropriate formulation.
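A toy Python example of the kind of conversion design domain mapping must perform: resampling a material-density field defined on one rectangular design domain onto another domain's grid. Nearest-neighbour resampling on normalized coordinates is an assumption made for illustration, not the method proposed in the paper.

import numpy as np

def map_density(source, target_shape):
    """Resample a 2-D density array onto a grid of target_shape.

    Both domains are treated as the unit square; each target cell takes the
    density of the source cell at the same normalized position.
    """
    src_rows, src_cols = source.shape
    tgt_rows, tgt_cols = target_shape
    rows = np.minimum((np.arange(tgt_rows) + 0.5) / tgt_rows * src_rows,
                      src_rows - 1).astype(int)
    cols = np.minimum((np.arange(tgt_cols) + 0.5) / tgt_cols * src_cols,
                      src_cols - 1).astype(int)
    return source[np.ix_(rows, cols)]

density = np.random.rand(40, 80)            # a material distribution from the database
mapped = map_density(density, (60, 60))     # converted to the target design domain
print(mapped.shape)                         # (60, 60)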


10.
Because existing general-purpose data models are only loosely tied to specific applications, this paper proposes constructing a domain data model that is closer to the concrete application domain, in order to solve the difficult problem of application-level data access control in MIS development. The concept of the domain data model is proposed and described, and its formal representation, domain relational operations, and domain operations are discussed in detail.

11.
Data distribution management (DDM) plays a key role in traffic control for large-scale distributed simulations. In recent years, several solutions have been devised to make DDM more efficient and adaptive to different traffic conditions. Examples include the region-based, fixed grid-based, and dynamic grid-based (DGB) schemes, as well as grid-filtered region-based and agent-based DDM schemes. However, less effort has been directed toward improving the processing performance of DDM techniques. This paper presents a novel DDM scheme, the adaptive dynamic grid-based (ADGB) scheme, that optimizes DDM time through an analysis of matching performance. ADGB uses an advertising scheme in which information about the target cell involved in matching subscribers to publishers is known in advance. An important concept, the distribution rate (DR), is devised; the DR represents the relative processing load and communication load generated at each federate. The DR and the matching performance are used within ADGB to select, throughout the simulation, the advertisement scheme that achieves the maximum gain with acceptable network traffic overhead. Assuming identical worst-case propagation delays and a high matching probability, performance estimates show that ADGB can achieve a maximum efficiency gain of 66% over the DGB scheme. The novelty of the ADGB scheme is its focus on improving processing performance, an important (and often forgotten) goal of DDM strategies.

12.
In large-scale interactive simulation based on HLA/RTI, efficiently implementing the information exchange and delivery mechanisms of data distribution management is an important part of distributed interactive simulation. This paper introduces the federation development and execution process model and data distribution management in HLA simulation, proposes a new data distribution management algorithm, and explains the theory behind the algorithm and its concrete implementation. Analysis of a data distribution management simulation system that uses the algorithm shows that it effectively improves data-filtering efficiency and thereby shortens overall simulation time.

13.
赵德群  孙光民 《计算机仿真》2009,26(10):142-144,232
Because traditional anomaly-detection approaches for databases handle disk read/write counts poorly, and to prevent illegal use of data, a DDM-based method for detecting cumulative database anomalies is proposed. Based on the database schema and the data involved, a membership function measures the database's degree of suspicion within a given interval, yielding an effective DDM-based way to detect cumulative anomalies, which is then tested experimentally. The experimental results show that, with DDM, without DDM, with a single client, and with parallel clients, the DDM-based active fuzzy detection method for cumulative database anomalies has almost no impact on the number of disk writes, demonstrating that it is an effective way to detect cumulative database anomalies.
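As an illustration of the membership-function idea only (the thresholds and structure are assumptions, not values from the paper), a piecewise-linear function can map a session's cumulative deviation from its normal access profile into a suspicion degree in [0, 1]:

def suspicion_degree(cumulative_deviation, low=5.0, high=20.0):
    """Fuzzy membership of 'anomalous' for a cumulative deviation score."""
    if cumulative_deviation <= low:
        return 0.0
    if cumulative_deviation >= high:
        return 1.0
    return (cumulative_deviation - low) / (high - low)

# Accumulate per-operation deviations over a session and flag the session
# once the suspicion degree crosses an alert threshold.
session_deviations = [1.2, 0.8, 3.5, 4.0, 8.1]
score = 0.0
for d in session_deviations:
    score += d
    if suspicion_degree(score) >= 0.8:
        print(f"cumulative anomaly suspected (score={score:.1f})")
        break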

14.
DDM - a cache-only memory architecture (total citations: 1; self-citations: 0; others: 1)
Hagersten E., Landin A., Haridi S. Computer, 1992, 25(9): 44-54
The Data Diffusion Machine (DDM), a cache-only memory architecture (COMA) that relies on a hierarchical network structure, is described. The key ideas behind DDM are introduced by describing a small machine, which could be a COMA on its own or a subsystem of a larger COMA, and its protocol. A large machine with hundreds of processors is also described. The DDM prototype project is discussed, and simulated performance results are presented.

15.
In large-scale distributed interactive simulation, an important function of data distribution management (DDM) is to reduce the irrelevant data received by federates, i.e., to filter data. DDM allows federates to express, through update regions or subscription regions in a routing space, the ranges over which they want to send or receive data; region-matching operations then determine the supply and demand relationships for data and so realize the filtering. The key is how to reduce the number of regions that must be matched, and thereby the cost of the region-matching computation. To this end, the paper proposes a grid-based region-matching algorithm.

16.
We consider the numerical simulation of buoyancy-affected, incompressible turbulent flows using a stabilized finite-element method. We present an approach which combines two domain decomposition methods (DDM). Firstly, we apply a DDM with full overlap for near-wall modelling, which can be interpreted as an improved wall-function concept. Secondly, a non-overlapping DDM of iteration-by-subdomains type for the parallel solution of the linearized problems is employed. For this scheme, we demonstrate both the accuracy for a benchmark problem and the applicability to realistic indoor-air flow problems.
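For readers unfamiliar with domain decomposition, the toy Python sketch below runs an overlapping alternating-Schwarz iteration for the 1-D model problem -u'' = 1 on (0, 1) with homogeneous Dirichlet boundary conditions. It only illustrates the DDM idea and is unrelated to the stabilized finite-element solver and the non-overlapping scheme used in the paper.

import numpy as np

n = 101                          # global grid points, including the boundaries
h = 1.0 / (n - 1)
u = np.zeros(n)                  # current global iterate
f = np.ones(n)                   # right-hand side

# Two overlapping subdomains given by index ranges (inclusive endpoints).
subdomains = [(0, 60), (40, 100)]

def solve_subdomain(u, lo, hi):
    """Solve -u'' = f on points lo..hi with Dirichlet data u[lo] and u[hi]."""
    m = hi - lo - 1                          # number of interior unknowns
    A = (np.diag(2.0 * np.ones(m))
         - np.diag(np.ones(m - 1), 1)
         - np.diag(np.ones(m - 1), -1)) / h**2
    b = f[lo + 1:hi].copy()
    b[0] += u[lo] / h**2                     # boundary data from the iterate
    b[-1] += u[hi] / h**2
    u[lo + 1:hi] = np.linalg.solve(A, b)

for sweep in range(30):                      # alternating Schwarz sweeps
    for lo, hi in subdomains:
        solve_subdomain(u, lo, hi)

x = np.linspace(0.0, 1.0, n)
print(np.max(np.abs(u - 0.5 * x * (1 - x))))   # error is tiny after the sweeps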

17.
A hybrid dynamic DDM implementation method (total citations: 1; self-citations: 0; others: 1)
张霞  黄莎白 《计算机工程》2003,29(20):14-15,179
This paper introduces the basic content and process of data distribution management (DDM) in HLA and analyzes the two classical DDM implementation approaches in use today. Building on this analysis, it combines the advantages of the existing approaches and proposes a hybrid, dynamic DDM implementation method that improves the precision of region matching and reduces the consumption of network resources, thereby improving on existing DDM methods.

18.
Data mining research currently faces two great challenges: how to provide data mining services with just-in-time and autonomous properties, and how to mine distributed and privacy-protected data. To address these problems, the authors adopt the Business Process Execution Language for Web Services in a service-oriented distributed data mining (DDM) platform to choreograph DDM component services and fulfill global data mining requirements. They also use the learning-from-abstraction methodology to achieve privacy-preserving DDM. Finally, they illustrate how localized autonomy on privacy-policy enforcement plus a bidding process can help the service-oriented system self-organize.

19.
This paper briefly explains why DDM is necessary in HLA, then analyzes the principles underlying HLA and DDM implementations from a general distributed-systems point of view, and presents the basic flow of data publication and subscription in HLA when DDM is present. Finally, it describes the known DDM implementation approaches as well as our own, and gives a detailed request-response mechanism for object attributes and interactions.

20.
Multi-agent systems (MAS) offer an architecture for distributed problem solving. Distributed data mining (DDM) algorithms focus on one class of such distributed problem-solving tasks: the analysis and modeling of distributed data. This paper offers a perspective on DDM algorithms in the context of multi-agent systems. It discusses broadly the connection between DDM and MAS. It provides a high-level survey of DDM, then focuses on distributed clustering algorithms and some potential applications in multi-agent-based problem-solving scenarios. It reviews algorithms for distributed clustering, including privacy-preserving ones. It describes challenges for clustering in sensor-network environments, potential shortcomings of the current algorithms, and corresponding future work. It also discusses confidentiality (privacy preservation) and presents a new algorithm for privacy-preserving density-based clustering.
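One common pattern consistent with this survey (a generic Python illustration, not any specific algorithm reviewed in the paper): each site shares only local cluster centroids and member counts rather than raw records, and a coordinator merges them by weighted averaging.

import numpy as np

def local_kmeans(data, k, iters=20, seed=0):
    """Plain k-means run locally at one site; returns (centroids, counts)."""
    rng = np.random.default_rng(seed)
    centroids = data[rng.choice(len(data), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((data[:, None, :] - centroids[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = data[labels == j].mean(axis=0)
    return centroids, np.bincount(labels, minlength=k)

def merge_sites(summaries, k_global, iters=20):
    """Cluster the sites' centroids, weighting each by its member count."""
    centroids = np.vstack([c for c, _ in summaries])
    weights = np.concatenate([n for _, n in summaries]).astype(float)
    global_centroids = centroids[:k_global].copy()
    for _ in range(iters):
        labels = np.argmin(((centroids[:, None, :] - global_centroids[None]) ** 2).sum(-1), axis=1)
        for j in range(k_global):
            mask = labels == j
            if mask.any():
                global_centroids[j] = np.average(centroids[mask], axis=0, weights=weights[mask])
    return global_centroids

site_a = np.random.default_rng(1).normal(0.0, 1.0, (200, 2))   # raw data stays local
site_b = np.random.default_rng(2).normal(5.0, 1.0, (200, 2))
summaries = [local_kmeans(site_a, 2), local_kmeans(site_b, 2)]
print(merge_sites(summaries, k_global=2))    # two global centroids near 0 and 5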
