Similar Articles
20 similar articles found (search time: 31 ms)
1.
In many data-centric storage techniques, each event is mapped to a storage location by hashing its event type. However, most of these techniques use storage memory poorly because a high percentage of the load is assigned to a relatively small portion of the sensor nodes, and these overloaded nodes may then fail to store events effectively. To solve this problem, we propose a grid-based dynamic load-balancing approach for data-centric storage in sensor networks that relies on two schemes: (1) a cover-up scheme that handles a storage node whose memory space is depleted and can adjust the number of storage nodes dynamically; (2) multi-threshold levels that achieve load balancing within each grid and across all nodes. Simulations show that our scheme can enhance the quality of data and avoid storage hotspots even when a vast number of events occur in the sensor network.
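The cover-up idea, overflowing from a full home node to a neighbouring storage node, can be sketched as a minimal illustration. The class name, capacity, and linear probing order are assumptions for the sketch, not the paper's actual scheme:

```python
import hashlib

class GridStorage:
    """Toy data-centric storage grid with a cover-up overflow scheme (illustrative)."""

    def __init__(self, num_nodes, capacity):
        self.capacity = capacity                    # max events per storage node
        self.nodes = [[] for _ in range(num_nodes)]

    def home_node(self, event_type):
        # Hash the event type to its primary (home) storage node.
        h = int(hashlib.md5(event_type.encode()).hexdigest(), 16)
        return h % len(self.nodes)

    def store(self, event_type, event):
        # Start at the home node; if it is full, "cover up" to the next node.
        idx = self.home_node(event_type)
        for step in range(len(self.nodes)):
            node = self.nodes[(idx + step) % len(self.nodes)]
            if len(node) < self.capacity:
                node.append((event_type, event))
                return (idx + step) % len(self.nodes)
        raise MemoryError("all storage nodes are full")

grid = GridStorage(num_nodes=4, capacity=2)
for i in range(5):
    grid.store("temperature", i)   # overflow spills to neighbouring nodes
loads = [len(n) for n in grid.nodes]
```

Because overflow spreads to neighbours instead of dropping events, no single node holds more than its capacity even when one event type dominates.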

2.
Emerging network scenarios call for innovative open service frameworks that ensure self-adaptability and long-lasting evolvability. In this paper, we assess the need for such innovative service frameworks, and discuss how their engineering should draw inspiration from natural ecosystems, i.e., by modelling services as autonomous individuals in an ecosystem of other services and data sources. We introduce a reference conceptual architecture with the goal of clarifying the concepts expressed and framing the several possible nature-inspired metaphors that could be adopted to realise the idea. On this basis, we go into detail about one such approach, in which the rules governing the ecosystem are inspired by biochemical mechanisms. A case study is also introduced to exemplify the potential of the presented biochemical approach and to experiment with some representative biochemistry-inspired patterns of adaptive service organisation and evolution.

3.
In a distributed stream processing system, streaming data are continuously disseminated from the sources to the distributed processing servers. To enhance the dissemination efficiency, these servers are typically organized into one or more dissemination trees. In this paper, we focus on the problem of constructing dissemination trees to minimize the average loss of fidelity of the system. We observe that existing heuristic-based approaches can only explore a limited solution space and hence may lead to sub-optimal solutions. In contrast, we propose an adaptive and cost-based approach. Our cost model takes into account both the processing cost and the communication cost. Furthermore, as a distributed stream processing system is vulnerable to inaccurate statistics, runtime fluctuations of data characteristics, server workloads, and network conditions, we have designed our scheme to be adaptive to these situations: an operational dissemination tree may be incrementally transformed into a more cost-effective one. Our adaptive strategy employs distributed decisions made by the distributed servers independently, based on localized statistics collected by each server at runtime. For a relatively static environment, we also propose two static tree construction algorithms relying on a priori system statistics. These static trees can also be used as initial trees in a dynamic environment. We apply our schemes to both single- and multi-object dissemination. Our extensive performance study shows that the adaptive mechanisms are effective in a dynamic context and the proposed static tree construction algorithms perform close to optimal in a static environment.
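The cost-based comparison of candidate trees can be sketched roughly as follows. The cost constants and the flat/chain candidate trees are illustrative assumptions; the paper's actual model also accounts for loss of fidelity:

```python
# Each tree is a dict: node -> list of children. Cost figures are illustrative.
PROC_COST = 1.0          # processing cost charged per participating node
COMM_COST = {("s", "a"): 2.0, ("s", "b"): 5.0, ("a", "b"): 1.0}

def tree_cost(tree):
    # Total cost = per-node processing cost + per-edge communication cost.
    nodes = set(tree) | {c for cs in tree.values() for c in cs}
    processing = PROC_COST * len(nodes)
    communication = sum(COMM_COST[(p, c)] for p, cs in tree.items() for c in cs)
    return processing + communication

# Two candidate dissemination trees rooted at source "s".
flat = {"s": ["a", "b"]}          # s streams to both servers directly
chain = {"s": ["a"], "a": ["b"]}  # s streams to a, which relays to b

best = min([flat, chain], key=tree_cost)
```

Here relaying through the cheap s-a and a-b links beats the expensive direct s-b link, which is the kind of incremental transformation a cost-based scheme can discover.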

4.
This paper analyses the characteristics of distributed virtual environment simulation and proposes a grid-based massive data management framework for such simulations. The framework adopts a layered architecture consisting, from bottom to top, of grid nodes, a high-performance communication system, a data storage and processing system, and a computing system. A prototype system based on this architecture is presented. Simulation results on the prototype show that the massive data management architecture meets the real-time, stability, and high-reliability requirements of virtual environment simulation.

5.
Grid computing enables users to perform computationally expensive applications on distributed resources acquired dynamically. Users can combine structured data and analysis components from distributed sites into new applications. Distributed query processing offers an established way of structuring such computations, and well-known tools like OGSA-DAI and OGSA-DQP provide, respectively, a common interface to heterogeneous databases and a way of exploiting distributed resources. Such significant benefits are, however, often undermined by high communication costs arising from the need to move data between distributed resources. This paper describes an approach that addresses this by dynamically deploying query processing engines, analysis services and databases within virtual machines, at internet scale, so as to reduce communication costs. Results of internet-scale experiments are presented to demonstrate the performance benefits. Further, dynamic deployment driven by requirements allows the creation of an ad-hoc runtime engine and thus opens up the possibility of a virtual marketplace for software and hardware resources.

6.
The number of mobile agents and the total execution time are two factors used to represent the system overhead that must be considered as part of mobile agent planning (MAP) for distributed information retrieval. In addition to these two factors, the time constraints at the nodes of an information repository must also be taken into account when attempting to improve the quality of information retrieval. In previous studies, MAP approaches could not take into account dynamic network conditions, e.g., variable network bandwidth and disconnections, such as are found in peer-to-peer (P2P) computing. For better performance, mobile agents that are more sensitive to network conditions must be used. In this paper, we propose a new MAP approach named Timed Mobile Agent Planning (Tmap). The proposed approach minimizes the number of mobile agents and the total execution time while keeping the turnaround time to a minimum, even when some nodes have time constraints. It also takes dynamic network conditions into account so that plans reflect the actual state of the network more accurately. Moreover, we incorporate security and fault-tolerance mechanisms into the planning approach to better adapt it to real network environments.

7.
Data processing complexity, partitionability, locality and provenance play a crucial role in the effectiveness of distributed data processing. The dynamics of data processing necessitate effective modeling that allows the understanding of, and reasoning about, the fluidity of data processing. Through virtualization, resources have become scattered, heterogeneous, and dynamic in performance and networking. In this paper, we propose a new distributed data processing model based on automata, in which data processing is modeled as state transformations. This approach falls within the category of declarative concurrent paradigms, which are fundamentally different from imperative approaches in that communication and function order are not explicitly modeled. This abstracts away concurrency and is thus well suited to distributed systems. Automata give us a way to formally describe data processing independently of the underlying processes, while also providing routing information to route data, based on its current state, in a P2P fashion around networks of distributed processing nodes. Through an implementation of the model, named Pumpkin, we capture the automata schema and routing table in a data processing protocol and show how globally distributed resources can be brought together in a collaborative way to form a processing plane on which data objects are self-routable.

8.
In many-task computing (MTC), applications such as scientific workflows or parameter sweeps communicate via intermediate files; application performance strongly depends on the file system in use. The state of the art uses runtime systems providing in-memory file storage that is designed for data locality: files are placed on those nodes that write or read them. With data locality, however, task distribution conflicts with data distribution, leading to application slowdown, and worse, to prohibitive storage imbalance. To overcome these limitations, we present MemFS, a fully symmetrical, in-memory runtime file system that stripes files across all compute nodes, based on a distributed hash function. Our cluster experiments with Montage and BLAST workflows, using up to 512 cores, show that MemFS has both better performance and better scalability than the state-of-the-art, locality-based file system, AMFS. Furthermore, our evaluation on a public commercial cloud validates our cluster results. On this platform MemFS shows excellent scalability up to 1024 cores and is able to saturate the 10G Ethernet bandwidth when running BLAST and Montage.
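The striping idea can be sketched as follows: each fixed-size stripe of a file is hashed, together with its index, to a compute node, so storage load spreads symmetrically regardless of which node writes the file. The stripe size, hash choice, and function names here are illustrative assumptions, not MemFS internals:

```python
import hashlib

NUM_NODES = 8
STRIPE_SIZE = 4  # bytes per stripe; unrealistically small, for illustration

def stripe_placement(path, size):
    """Map each stripe of a file to a node via a distributed hash function."""
    placement = []
    num_stripes = -(-size // STRIPE_SIZE)  # ceiling division
    for stripe in range(num_stripes):
        # Hash the (path, stripe index) pair so stripes of one file scatter.
        key = f"{path}:{stripe}".encode()
        node = int(hashlib.sha1(key).hexdigest(), 16) % NUM_NODES
        placement.append(node)
    return placement

nodes = stripe_placement("/tmp/out.dat", size=40)  # 40 bytes -> 10 stripes
```

Since placement depends only on the file name and stripe index, any node can locate any stripe without consulting the writer, which is what makes the design fully symmetrical.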

9.
《Knowledge》1999,12(7):381-389
Window management systems have been successful in assisting users in controlling multiple applications but this success has been achieved at the expense of an increased number of window management operations. Intelligent window management is proposed as a way of reducing the number of such management operations and a particular approach—the Vanishing Windows approach—is discussed in detail. In this approach the inactive windows gradually shrink and move to the side. Choice of the basic design parameters is discussed and the reasons given for implementation decisions. Knowledge about simple geometric reasoning is stored in the system. The system is evaluated against a traditional window management system and performance gains are demonstrated. User opinion on the system is positive. Overall, the users did not find the system intrusive or feel that they had a lack of control.

10.
11.
The ever-growing amount of data generated and consumed on the move using portable devices gives rise to serious data management issues. The fact that each person may own, and is likely to carry, several such devices further aggravates this problem. On the other hand, new opportunities for achieving cleverer device interaction emerge due to the increased wireless networking capabilities of wearable and portable devices. This paper introduces OmniStore, a system that combines portable devices with infrastructure-based services to relieve the user of explicit and time-consuming file management tasks. Our approach is to let devices communicate with each other, as well as with a repository service, in an opportunistic and asynchronous fashion, performing behind the scenes a variety of file movement and copying actions that would typically have required considerable explicit user interaction.

12.
Modelling cloud data using an adaptive slicing approach
In reverse engineering, conventional surface modelling from point cloud data is time-consuming and requires expert modelling skills. One innovative modelling method is to directly slice the point cloud along a direction and generate a layer-based model, which can be used directly for fabrication using rapid prototyping (RP) techniques. However, the main challenge is that the thickness of each layer must be carefully controlled so that every layer yields the same shape error, which must stay within the given tolerance bound. In this paper, an adaptive slicing method for modelling point cloud data is presented. It seeks to generate a direct RP model with a minimum number of layers for a given shape error. The method employs an iterative approach to find the maximum allowable thickness for each layer. Issues including multiple-loop segmentation in layers, profile curve generation, and data filtering are discussed. The efficacy of the algorithm is demonstrated by case studies.
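The iterative search for the maximum allowable layer thickness can be sketched as a greedy pass over height-sorted samples. The radius-spread error measure and the sample format below are simplifying assumptions, not the paper's shape-error definition:

```python
def adaptive_slices(samples, tolerance):
    """Greedily grow each layer until the shape error would exceed the tolerance.

    `samples` is a list of (z, radius) pairs sorted by z; within a layer the
    shape error is approximated by the radius spread (a simplification of a
    real shape-error measure against the layer profile).
    """
    layers, start = [], 0
    while start < len(samples):
        end = start + 1
        # Extend the layer while the radius spread stays within tolerance.
        while end < len(samples):
            radii = [r for _, r in samples[start:end + 1]]
            if max(radii) - min(radii) > tolerance:
                break
            end += 1
        layers.append((samples[start][0], samples[end - 1][0]))  # (z_bottom, z_top)
        start = end
    return layers

# A cone-like profile: radius grows linearly with height.
cloud = [(z * 0.1, 1.0 + z * 0.05) for z in range(20)]
layers = adaptive_slices(cloud, tolerance=0.12)
```

A tighter tolerance forces thinner layers where the shape changes quickly; loosening it (e.g. `tolerance=0.5`) collapses the same cloud into far fewer layers.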

13.
Application and implementation of an improved sorting-based matching algorithm in DDM
Data distribution management (DDM) is an effective means of reducing redundant data on the network and a key technology in implementing HLA-RTI. Based on IEEE 1516, this paper introduces the DDM filtering mechanism and traditional matching methods. After analysing the matching principle of the sorting-based algorithm, it presents a strategy for matching subscription regions against publication regions using sorting. To address the long running time and large memory footprint that the sorting algorithm exhibits when the number of regions is large, an improved sorting algorithm is proposed. Simulation experiments show that the improved algorithm requires less time when the number of regions is large, and that it remains more stable when the region extents change.
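In one dimension, sorting-based region matching can be sketched as a sweep over sorted interval endpoints; this is an illustrative simplification, not the paper's improved algorithm or IEEE 1516 code. Subscription and publication extents are opened and closed in coordinate order, and overlapping pairs are reported without testing every subscription against every publication:

```python
def match_regions(subs, pubs):
    """1-D sorting-based region matching (illustrative sketch).

    subs/pubs: dicts mapping a region name to its (lo, hi) extent.  Endpoints
    are sorted once; a sweep keeps the sets of currently open regions and
    emits every overlapping (subscription, publication) pair.
    """
    events = []  # (coordinate, is_end, kind, name); opens sort before closes
    for name, (lo, hi) in subs.items():
        events.append((lo, 0, "sub", name))
        events.append((hi, 1, "sub", name))
    for name, (lo, hi) in pubs.items():
        events.append((lo, 0, "pub", name))
        events.append((hi, 1, "pub", name))
    events.sort()

    open_subs, open_pubs, matches = set(), set(), set()
    for _, is_end, kind, name in events:
        group = open_subs if kind == "sub" else open_pubs
        if is_end:
            group.discard(name)
        else:
            group.add(name)
            # A newly opened region overlaps every open region of the other kind.
            others = open_pubs if kind == "sub" else open_subs
            for other in others:
                matches.add((name, other) if kind == "sub" else (other, name))
    return matches

pairs = match_regions({"S1": (0, 5), "S2": (6, 9)},
                      {"P1": (4, 7), "P2": (10, 12)})
```

The sort costs O(n log n) once, after which the sweep touches each endpoint once, which is the efficiency argument behind sorting-based matching.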

14.
Scientific data analysis and visualization have become key components of today's large-scale simulations. Due to the rapidly increasing data volume and the awkward I/O patterns across highly structured files, known serial methods and tools do not scale well and usually perform poorly on traditional architectures. In this paper, we propose a new framework, ParSA (parallel scientific data analysis), for high-throughput, scalable scientific analysis on a distributed file system. ParSA presents optimization strategies for grouping and splitting logical units to exploit the distributed I/O properties of the file system, for scheduling the distribution of block replicas to reduce network reads, and for maximizing the overlap of data reading, processing, and transfer during computation. In addition, ParSA provides interfaces similar to the NetCDF Operators (NCO), which are used in most climate data diagnostic packages, making the framework easy to use. We use ParSA to accelerate well-known analysis methods for climate models on the Hadoop Distributed File System (HDFS). Experimental results demonstrate the high efficiency and scalability of ParSA, which achieves a maximum throughput of 1.3 GB/s on a six-node Hadoop cluster with five disks per node, whereas a RAID-6 storage node achieves only 392 MB/s.

15.
16.
The average response time of tasks in a distributed system depends on the strategy by which workload is shared among the nodes of the system. A common approach to load sharing is to resort to some distributed algorithm that arranges task transfers between nodes based on information about the system's state. In this paper, we present a hybrid approach to adaptive load sharing that outperforms existing algorithms and is especially effective in response to workload peaks, under both heavy and light system load. The strategy we propose is novel in that it relies on a fully distributed algorithm when the system is heavily loaded, but resorts to a centrally coordinated one when parts of the system become idle. The transition from one algorithm to the other is performed automatically, and the simplicity of the proposed algorithms makes it possible to use a centralized component without incurring scalability problems or instabilities. Both algorithms are very lightweight and need no parameter tuning. Simulations show that the hybrid approach performs well under all load conditions and task generation patterns, is weakly sensitive to processing overhead and communication delays, and scales well (to hundreds of nodes) despite the use of a centralized component.
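The automatic switch between the two modes can be sketched as a toy scheduler. The class name, probe count, and idle-list policy are assumptions made for illustration, not the paper's actual algorithm:

```python
import random

class HybridScheduler:
    """Toy hybrid load sharing: a central idle list is used while idle nodes
    exist; otherwise tasks are placed by distributed random probing."""

    def __init__(self, num_nodes, probes=2, seed=0):
        self.queues = [0] * num_nodes
        self.idle = set(range(num_nodes))   # the coordinator's idle list
        self.probes = probes
        self.rng = random.Random(seed)

    def submit(self):
        if self.idle:
            # Centrally coordinated mode: hand the task to a known idle node.
            node = self.idle.pop()
        else:
            # Fully distributed mode: probe a few random nodes, pick the
            # least loaded one (no global state needed).
            candidates = self.rng.sample(range(len(self.queues)), self.probes)
            node = min(candidates, key=lambda n: self.queues[n])
        self.queues[node] += 1
        return node

sched = HybridScheduler(num_nodes=4)
for _ in range(12):
    sched.submit()
```

The transition is automatic: as soon as the idle list empties, submissions fall through to the distributed probing branch, and the centralized component does no work while the system is busy.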

17.
The development of complex models can be greatly facilitated by the utilization of libraries of reusable model components. In this paper we describe an object-oriented module specification formalism (MSF) for implementing archivable modules in support of continuous spatial modeling. This declarative formalism provides the high level of abstraction necessary for maximum generality, provides enough detail to allow a dynamic simulation to be generated automatically, and avoids the “hard-coded” implementation of space-time dynamics that makes procedural specifications of limited usefulness for specifying archivable modules. A set of these MSF modules can be hierarchically linked to create a parsimonious model specification, or “parsi-model”. The parsi-model exists within the context of a modeling environment (an integrated set of software tools which provide the computer services necessary for simulation development and execution), which can offer simulation services that are not possible in a loosely-coupled “federated” environment, such as graphical module development and configuration, automatic differentiation of model equations, run-time visualization of the data and dynamics of any variable in the simulation, transparent distributed computing within each module, and fully configurable space-time representations. We believe this approach has great potential for bringing the power of modular model development into the collaborative simulation arena.

18.
As crowd simulation in micro-spatial environments is more widely applied in urban planning and management, the construction of an appropriate spatial data model that supports such applications becomes essential. To address the requirements of building a model for crowd simulation and people–place relationship analysis in micro-spatial environments, this article presents the concept of the grid as a basic unit of people–place data association. Subsequently, a grid-based spatial data model is developed for modelling spatial data using Geographic Information Systems (GIS). The application of the model to crowd simulations in indoor and outdoor spatial environments is described. The model has four advantages: first, it covers both the geometrical characteristics of geographic entities and the behaviour characteristics of individuals within micro-spatial environments; second, it fuses the object-oriented model with spatial topological relationships; third, it enables an integrated representation of indoor and outdoor environments; and fourth, crowd simulation models such as Multi-agent Systems (MAS) and Cellular Automata (CA) can be further fused into it for intelligent simulation and the analysis of individual behaviours. Lastly, this article presents an experimental implementation of the data model in which individual behaviours are simulated and analysed to illustrate the potential of the proposed model.

19.
An approach to adaptive control of fuzzy dynamic systems
This paper discusses adaptive control for a class of fuzzy dynamic models. The adaptive control law is first designed in each local region and then constructed over the global domain. It is shown that the resulting fuzzy adaptive control system is globally stable. Robustness issues of the adaptive control system are also addressed. A simulation example is given to demonstrate the application of the approach.

20.
Research on data distribution management (DDM) in the High Level Architecture (HLA) mainly aims, while remaining compliant with the HLA standard, to improve data filtering efficiency, reduce computation, and provide good scalability for distributed simulation applications of various sizes. Lookahead is an important concept in distributed-simulation time management protocols: each simulation entity uses its lookahead to notify other entities earlier of the time stamps of the events it will generate, thereby speeding up execution. A data distribution mechanism based on coloured Petri nets and lookahead filters data effectively, improves filtering efficiency, and guarantees the success rate of data transmission and reception. Simulation results show that the data distribution mechanism based on coloured Petri nets and lookahead outperforms the region-matching approach.
