Similar Articles
20 similar articles found (search time: 0 ms)
1.
In recent years, some scholars have advocated the use of intelligent products to make systems more efficient throughout the Product Life Cycle (PLC). Integrating intelligence and information into the products themselves is possible with, among other technologies, auto-ID technologies (barcode, RFID, …). In this paper, a new kind of intelligent product is introduced, referred to as the “communicating material” paradigm. Under this paradigm, a product is (i) capable of embedding information on all or part of the material it is made of and (ii) capable of undergoing physical transformations without losing its communication ability or the data stored on it. This new material is used in our study to convey information between the different actors of the PLC, thus improving data interoperability, availability and sustainability. Although “communicating materials” provide new abilities compared to conventional products, they still have low memory capacities compared to product databases, which grow ever larger. An information dissemination framework is developed in this paper to select the appropriate information to be stored on the product at different stages of the PLC. This appropriateness is based on a degree of data relevance, computed by taking into account the context of use of the product (actor’s expectations, environment, …). The framework also provides the tools to split information across all or parts of the material. A case study is presented, which aims at embedding context-sensitive information on “communicating textiles”.
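As an illustration of the selection problem described above, the following sketch picks the data items to embed on the material under a memory budget, ranking them by a context-dependent relevance score. The score, field names and weights are assumptions for the example, not the paper's actual relevance model.

```python
# Illustrative sketch (not the authors' algorithm): given a memory budget on the
# "communicating material" and a context-dependent relevance score per data item,
# keep the most relevant items that fit. Field names and weights are assumptions.
from dataclasses import dataclass

@dataclass
class DataItem:
    name: str
    size_bytes: int
    tags: frozenset  # which PLC actors / contexts the item is useful for

def relevance(item: DataItem, context_tags: set, weights: dict) -> float:
    """Toy relevance degree: weighted overlap between item tags and the current context."""
    return sum(weights.get(t, 1.0) for t in item.tags & context_tags)

def select_for_embedding(items, context_tags, weights, budget_bytes):
    """Greedily embed the items with the best relevance per byte until the budget is full."""
    ranked = sorted(items, key=lambda i: relevance(i, context_tags, weights) / i.size_bytes,
                    reverse=True)
    chosen, used = [], 0
    for item in ranked:
        if used + item.size_bytes <= budget_bytes and relevance(item, context_tags, weights) > 0:
            chosen.append(item)
            used += item.size_bytes
    return chosen

items = [
    DataItem("fiber_composition", 64, frozenset({"manufacturing", "recycling"})),
    DataItem("washing_instructions", 32, frozenset({"usage"})),
    DataItem("full_test_report", 4096, frozenset({"manufacturing"})),
]
print([i.name for i in select_for_embedding(items, {"usage", "recycling"}, {"recycling": 2.0}, 128)])
```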

2.
To address the challenges of massive event volumes, distributed data processing, and context dependence faced by complex event processing over Internet of Things (IoT) event clouds, a distributed context-sensitive complex event processing method is proposed. The method represents and reasons about event context using a fuzzy ontology, supports event-context processing through query rewriting, and performs distributed processing with heuristic optimization based on query planning and data partitioning. Experimental results show that the method can handle fuzzy event context and offers better performance and scalability than general-purpose methods for context-sensitive complex event processing over large-scale IoT event clouds.
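A minimal sketch of the general idea of fuzzy event context, not the paper's fuzzy-ontology machinery: an event's context attributes get a fuzzy membership degree, and the event is kept only if that degree passes a threshold. The membership functions and attribute names are assumptions.

```python
# Minimal sketch (assumptions throughout): evaluate a fuzzy membership degree for an
# event's context attributes and keep the event only if the degree passes a threshold.
def triangular(x, a, b, c):
    """Triangular fuzzy membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def context_degree(event, fuzzy_sets):
    """Minimum membership over all constrained context attributes (fuzzy AND)."""
    return min(mf(event[attr]) for attr, mf in fuzzy_sets.items())

# "High temperature" and "evening" as fuzzy context constraints.
fuzzy_sets = {
    "temperature": lambda t: triangular(t, 25.0, 35.0, 45.0),
    "hour":        lambda h: triangular(h, 17.0, 20.0, 23.0),
}

events = [{"id": 1, "temperature": 33.0, "hour": 19.0},
          {"id": 2, "temperature": 22.0, "hour": 21.0}]
relevant = [e for e in events if context_degree(e, fuzzy_sets) >= 0.5]
print([e["id"] for e in relevant])   # only event 1 matches the fuzzy context
```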

3.
In order to offer context-aware and personalized information, intelligent processing techniques are necessary. Different initiatives considering many contexts have been proposed, but user preferences need to be learned in order to offer contextualized and personalized services, products or information. Therefore, this paper proposes an agent-based architecture for context-aware and personalized event recommendation based on an ontology and a spreading activation algorithm. The ontology defines the domain knowledge model, while the spreading activation algorithm learns user patterns by discovering user interests. The proposed agent-based architecture was validated through the modeling and implementation of the eAgora application, illustrated in a pervasive university context.
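To make the interest-discovery step concrete, here is a minimal spreading-activation sketch over a small concept graph. The graph structure, decay factor and concept names are assumptions for illustration, not the eAgora implementation.

```python
# Minimal spreading-activation sketch (assumed graph, not the eAgora implementation):
# activation starts from concepts the user interacted with and decays as it spreads
# to related concepts, ranking candidate interests.
def spreading_activation(graph, seeds, decay=0.5, iterations=2):
    """graph: {concept: [related concepts]}; seeds: {concept: initial activation}."""
    activation = dict(seeds)
    for _ in range(iterations):
        spread = {}
        for node, value in activation.items():
            for neighbor in graph.get(node, []):
                spread[neighbor] = spread.get(neighbor, 0.0) + value * decay
        for node, value in spread.items():
            activation[node] = activation.get(node, 0.0) + value
    return sorted(activation.items(), key=lambda kv: kv[1], reverse=True)

graph = {
    "rock_concert": ["music", "live_events"],
    "music": ["jazz_festival"],
    "live_events": ["theater_play", "jazz_festival"],
}
# The user attended a rock concert, so that concept gets the initial activation.
print(spreading_activation(graph, {"rock_concert": 1.0}))
```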

4.
Discrete event simulations are a powerful technique for modeling stochastic systems with multiple components, where the interactions between these components are governed by the probability distribution functions associated with them. Complex discrete event simulations are often computationally intensive, with long completion times. This paper describes our solution to the problem of orchestrating the execution of a stochastic discrete event simulation in which computational hot spots evolve spatially over time. Our performance benchmarks report on our ability to balance computational loads in these settings. Copyright © 2013 John Wiley & Sons, Ltd.
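For readers unfamiliar with the technique itself, the following is a generic discrete-event simulation skeleton: a priority queue of timestamped events, where processing one event may schedule future ones. It illustrates the general mechanism only; the queueing model and rates are made up, and it says nothing about the paper's load-balancing scheme.

```python
# Generic discrete-event simulation skeleton (illustrative only, not the paper's system).
import heapq, random

def run_simulation(horizon=10.0, arrival_rate=1.0, service_rate=1.5, seed=42):
    rng = random.Random(seed)
    clock, busy_until, served = 0.0, 0.0, 0
    events = [(rng.expovariate(arrival_rate), "arrival")]   # (time, kind)
    while events:
        clock, kind = heapq.heappop(events)
        if clock > horizon:
            break
        if kind == "arrival":
            # A stochastic single-server queue: schedule the next arrival and this job's departure.
            heapq.heappush(events, (clock + rng.expovariate(arrival_rate), "arrival"))
            start = max(clock, busy_until)
            busy_until = start + rng.expovariate(service_rate)
            heapq.heappush(events, (busy_until, "departure"))
        else:
            served += 1
    return served

print("jobs served within the horizon:", run_simulation())
```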

5.
Epidemic information dissemination in distributed systems
Easy to deploy, robust, and highly resilient to failures, epidemic algorithms are a potentially effective mechanism for propagating information in large peer-to-peer systems deployed on the Internet or on ad hoc networks. The parameters of an epidemic algorithm can be adjusted to achieve high reliability despite process crashes and disconnections, packet losses, and a dynamic network topology. Although researchers have used epidemic algorithms in applications such as failure detection, data aggregation, resource discovery and monitoring, and database replication, their general applicability to practical, Internet-wide systems remains open to question. We describe four key problems: membership maintenance, network awareness, buffer management, and message filtering, and suggest some preliminary approaches to address them.
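A toy push-style epidemic round, showing the mechanism the abstract refers to. The uniform random peer selection and the parameter values are assumptions; real systems need the membership and buffer management issues listed above.

```python
# Minimal push-style epidemic dissemination over a synthetic network (generic sketch):
# each round, every informed node forwards the update to a few random peers; fanout
# and rounds are the tunable reliability parameters discussed in the abstract.
import random

def epidemic_rounds(num_nodes=1000, fanout=3, rounds=8, seed=1):
    rng = random.Random(seed)
    informed = {0}                      # node 0 initially holds the update
    for _ in range(rounds):
        newly = set()
        for node in informed:
            peers = rng.sample(range(num_nodes), fanout)   # uniform random peer selection
            newly.update(peers)
        informed |= newly
    return len(informed) / num_nodes

print(f"fraction of nodes reached: {epidemic_rounds():.3f}")
```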

6.
7.
Weixiong Rao, Lei Chen. World Wide Web, 2011, 14(5-6): 545-572
Recent years witnessed the explosive growth of “live” web content on the World Wide Web, such as Weblogs, RSS feeds, and real-time news. The popular use of RSS feeds/readers enables end users to subscribe to favorite content via input RSS URLs. However, the RSS feeds/readers architecture suffers from (i) high bandwidth consumption and (ii) limited filtering semantics. In this paper, we propose a stateful full-text dissemination scheme over structured P2P networks to address both issues. Specifically, on the semantic side, end users are allowed to subscribe to favorite content via input keywords; on the network bandwidth side, cooperative content polling, filtering and dissemination via DHT-based P2P overlay networks saves network bandwidth. Our contributions include novel techniques to (i) reduce the unit publishing cost by pruning irrelevant documents along the forwarding path towards destinations, and (ii) reduce the publication volume by selecting a very small number of meaningful terms. Based on real data sets, our experimental results show that the proposed scheme can significantly reduce the publishing cost with low maintenance overhead and high document quality.
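The "publish only a few meaningful terms" idea can be sketched as ranking a document's terms by a TF-IDF-like weight and publishing the document under only the top-k term keys. The scoring formula below is an assumption for illustration, not the paper's exact term-selection method.

```python
# Sketch of term selection before publication (scoring scheme is an assumption).
import math
from collections import Counter

def top_k_terms(doc_tokens, doc_freq, num_docs, k=3):
    """Rank the document's terms by TF * IDF and return the k most meaningful ones."""
    tf = Counter(doc_tokens)
    def weight(term):
        idf = math.log(num_docs / (1 + doc_freq.get(term, 0)))
        return tf[term] * idf
    return sorted(tf, key=weight, reverse=True)[:k]

doc = "gossip protocols disseminate updates gossip peers exchange updates".split()
doc_freq = {"gossip": 12, "updates": 40, "peers": 300, "protocols": 150,
            "disseminate": 8, "exchange": 500}
print(top_k_terms(doc, doc_freq, num_docs=1000, k=3))
```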

8.
This paper investigates the problem of real-time integration and processing of multimedia metadata collected by a distributed sensor network. The practical problem discussed is the efficiency of the technologies used in creating a knowledge base in real time. Specifically, an approach is proposed for the real-time, rule-based semantic enrichment of lower-level context features with higher-level semantics. The distinguishing characteristic is the provision of an intelligent middleware-based architecture onto which low-level components such as sensors, feature extraction algorithms and data sources, as well as high-level components such as application-specific ontologies, can be plugged. Throughout the paper, Priamos, a middleware architecture based on Semantic Web technologies, is presented, together with a stress test of the system’s operation under two test-case scenarios: a smart security surveillance application and a smart meeting room application. Performance measurements are conducted and the corresponding results are presented.
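A toy rule-based enrichment pass, as a generic illustration of lifting low-level features to higher-level semantic annotations; the rule contents and observation fields are made up and do not reflect the Priamos rule engine.

```python
# Toy rule-based semantic enrichment (generic illustration, not the Priamos engine):
# low-level sensor features are lifted to higher-level concepts when a rule's conditions hold.
def enrich(observation, rules):
    """Return the observation plus every high-level concept whose rule matches it."""
    concepts = set()
    for condition, concept in rules:
        if condition(observation):
            concepts.add(concept)
    return {**observation, "concepts": sorted(concepts)}

rules = [
    (lambda o: o["motion"] and o["hour"] >= 22, "AfterHoursActivity"),
    (lambda o: o["sound_db"] > 85, "LoudEvent"),
    (lambda o: o["motion"] and o["sound_db"] > 85, "PossibleIntrusion"),
]

obs = {"sensor": "cam-03", "hour": 23, "motion": True, "sound_db": 90}
print(enrich(obs, rules))
```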

9.
10.
In traditional co-clustering, the only basis for the clustering task is a given relationship matrix describing the strengths of the relationships between pairs of elements in the different domains. Relying on this single input matrix, co-clustering discovers relationships holding among groups of elements from the two input domains. In many real-life applications, on the other hand, other background knowledge or metadata about one or more of the two input domain dimensions may be available and, if leveraged properly, such metadata might play a significant role in the effectiveness of the co-clustering process. How additional metadata affects co-clustering, however, depends on how the process is modified to be context-aware. In this paper, we propose, compare, and evaluate three alternative strategies (metadata-driven, metadata-constrained, and metadata-injected co-clustering) for embedding available contextual knowledge into the co-clustering process. Experimental results show that it is possible to leverage the available metadata in discovering contextually relevant co-clusters, without significant overheads in terms of information-theoretical co-cluster quality or execution cost.

11.
Recently published algorithms for matching concurrent sets of events suffer from unbounded message queue growth if events arrive in an undesirable order. This paper presents algorithms that mitigate this problem by examining events waiting to be processed and removing those that cannot be part of a concurrent set.
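An illustrative pruning rule standing in for the paper's specific tests: if "concurrent" means that all events of a set fall within a time window W, then any queued event older than W relative to the latest observed timestamp can never complete a concurrent set and can be discarded, bounding queue growth. This interpretation of concurrency is an assumption made for the example.

```python
# Illustrative pruning of events that can no longer belong to any concurrent set
# (window-based concurrency is an assumption for this sketch).
from collections import deque

def prune(queue, latest_time, window):
    while queue and queue[0][0] < latest_time - window:
        queue.popleft()

queue = deque()                       # holds (timestamp, payload), oldest first
window = 5.0
for t, payload in [(1.0, "a"), (2.0, "b"), (9.0, "c"), (10.0, "d")]:
    prune(queue, t, window)           # drop events that can no longer be concurrent with t
    queue.append((t, payload))
print(list(queue))                    # events "a" and "b" were pruned when "c" arrived
```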

12.
Modeling collaboration processes is a challenging task. Existing modeling approaches cannot express the unpredictable, non-routine nature of human collaboration, which is influenced by the social context of the collaborators involved. We propose a modeling approach that treats collaboration processes as the evolution of a network of collaborative documents together with a social network of collaborators. Our modeling approach, accompanied by a graphical notation and formalization, makes it possible to capture the influence of the complex social structures formed by collaborators, and therefore facilitates activities such as the discovery of socially coherent teams, social hubs, or unbiased experts. We demonstrate the applicability and expressiveness of our approach and notation, and discuss their strengths and weaknesses.

13.
To address network congestion in vehicle-to-vehicle communication (V2VC) systems caused by high vehicle mobility, dynamic network topology and limited network resources, a context-aware distributed beacon scheduling scheme is proposed. Based on spatial context information, it dynamically schedules beacons in a TDMA-like transmission manner, delivering them with predictable delay and high reliability. Based on local measurements of channel load and the context information carried in beacons, each node dynamically schedules its beacon transmissions in the allocated time slots in a distributed manner. Simulation experiments evaluate the proposed scheme over various communication scenarios using a realistic channel model and the IEEE 802.11p PHY/MAC model; the results show that the scheme outperforms periodic scheduling in packet delivery ratio and channel access delay, and can satisfy the requirements of safety applications.
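A rough sketch of the distributed slot-selection step: each vehicle picks the least-loaded slot of the TDMA-like frame based on the occupancy it has overheard in neighbours' beacons. The occupancy bookkeeping and tie-breaking here are simplifying assumptions, not the scheme's exact rules.

```python
# Rough sketch of distributed, context-aware slot selection (simplified assumptions).
import random

def choose_slot(observed_occupancy, rng):
    """observed_occupancy[i] = number of neighbors heard transmitting in slot i."""
    lightest = min(observed_occupancy)
    candidates = [i for i, load in enumerate(observed_occupancy) if load == lightest]
    return rng.choice(candidates)     # break ties randomly to avoid synchronized collisions

rng = random.Random(7)
frame = [3, 0, 2, 0, 5, 1]            # overheard load per slot in the current frame
print("beacon slot chosen:", choose_slot(frame, rng))
```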

14.
This paper presents a rigorous analytic study of gossip-based message dissemination schemes that can be employed for content/service dissemination or discovery in unstructured and distributed networks. When using random gossiping, communication with multiple peers in one gossiping round is allowed. The algorithms studied in this paper are considered under different network conditions, depending on the knowledge of the state of the neighboring nodes in the network. Different node behaviors, with respect to their degree of cooperation and compliance with the gossiping process, are also incorporated. From the exact analysis, several important performance metrics and design parameters are derived analytically. Based on the proposed metrics and parameters, the performance of the gossip-based dissemination or search schemes, as well as the impact of the design parameters, is evaluated.
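In the spirit of the parameter evaluation described above, a small Monte Carlo stand-in for the exact analysis: estimate the fraction of nodes reached as a function of the gossip fanout, i.e., how many peers each informed node contacts per round. The network model and parameter values are assumptions.

```python
# Simulation stand-in for the analytic evaluation: coverage as a function of fanout.
import random

def coverage(num_nodes, fanout, rounds, trials=20, seed=0):
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        informed = {0}
        for _ in range(rounds):
            targets = {rng.randrange(num_nodes) for node in informed for _ in range(fanout)}
            informed |= targets
        total += len(informed)
    return total / (trials * num_nodes)

for f in (1, 2, 4):
    print(f"fanout={f}: average coverage {coverage(500, f, rounds=6):.2f}")
```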

15.
Quantization is an approach to distributed logical simulation in which the value space is quantized and trajectories are represented by the crossings of a set of thresholds. This is an alternative to the common approach, which discretizes the time base of a continuous trajectory to obtain a finite number of equally spaced sampled values over time. In distributed simulation, a quantizer checks for threshold crossings whenever an output event occurs and sends this value to a receiver, thereby reducing the number of messages exchanged among federates in a federation. This may increase performance in various ways, such as decreasing overall execution time or allowing a larger number of entities to be simulated. Predictive quantization is a more advanced approach that sends just one bit of information instead of the actual real value, so that not only the number of messages but also the message size can be significantly reduced. In this paper, we present an approach to packaging individual bits into a large message packet, called multiplexed predictive quantization. We demonstrate that this approach can save significant overhead (thereby maximizing data transmission) and can approach 100% efficiency in the limit of large numbers of simultaneous message sources encapsulated within individual federates. We also discuss the tradeoff between message bandwidth utilization and the error incurred in the quantization. The results relate bandwidth utilization and error to quantum size for federations executing in the HLA-compliant discrete event distributed simulation environment, DEVS/HLA. The theoretical and empirical results indicate that quantization can be very scalable, owing to reduced local computation demands as well as extremely favorable network-load-reduction/simulation-fidelity tradeoffs.
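A toy predictive quantizer illustrating the one-bit-per-crossing principle: the sender emits 1 when the signal crosses the next level upward and 0 when it crosses downward, and the receiver reconstructs the trajectory to within one quantum. This is a sketch of the principle only, not the DEVS/HLA implementation or its multiplexing layer.

```python
# Toy predictive quantization: one bit per threshold crossing instead of a value per sample.
def quantize_to_bits(samples, quantum, start_level=0.0):
    level, bits = start_level, []
    for x in samples:
        while x >= level + quantum:     # upward crossing -> bit 1
            level += quantum
            bits.append(1)
        while x <= level - quantum:     # downward crossing -> bit 0
            level -= quantum
            bits.append(0)
    return bits

def reconstruct(bits, quantum, start_level=0.0):
    level, out = start_level, []
    for b in bits:
        level += quantum if b else -quantum
        out.append(level)
    return out

signal = [0.1, 0.7, 1.6, 2.2, 1.4, 0.3]
bits = quantize_to_bits(signal, quantum=1.0)
print(bits)                      # the bit stream is far shorter than the raw sample stream
print(reconstruct(bits, 1.0))    # receiver's piecewise-constant view of the trajectory
```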

16.
We propose a novel algorithm for distributed processing applications constrained by the available communication resources, using diffusion strategies that achieve up to a 10³-fold reduction in the communication load over the network while delivering performance comparable to the state of the art. After computation of local estimates, the information is diffused among the processing elements (or nodes) non-uniformly in time by conditioning the information transfer on level crossings of the diffused parameter, resulting in a greatly reduced communication requirement. We provide the mean and mean-square stability analyses of our algorithms, and illustrate the gain in communication efficiency compared to other reduced-communication distributed estimation schemes.
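A simplified scalar version of the level-crossing-triggered diffusion idea: each node updates a local estimate from noisy measurements but shares it with neighbours only when the estimate has drifted past a threshold since the last transmission. All parameters, the fully connected topology and the combination rule are assumptions; the paper analyses a proper diffusion scheme.

```python
# Simplified scalar sketch of communication-censored diffusion estimation (assumptions throughout).
import random

def run(num_nodes=5, steps=200, true_value=3.0, step_size=0.05, level=0.1, seed=3):
    rng = random.Random(seed)
    estimates = [0.0] * num_nodes
    last_shared = [0.0] * num_nodes          # value each node last broadcast
    transmissions = 0
    for _ in range(steps):
        for i in range(num_nodes):
            measurement = true_value + rng.gauss(0.0, 0.5)
            estimates[i] += step_size * (measurement - estimates[i])   # local adaptation step
            if abs(estimates[i] - last_shared[i]) >= level:            # level crossing -> diffuse
                last_shared[i] = estimates[i]
                transmissions += 1
        # combination step: average the latest shared values (fully connected for simplicity)
        combined = sum(last_shared) / num_nodes
        estimates = [0.5 * e + 0.5 * combined for e in estimates]
    return estimates, transmissions

estimates, sent = run()
print(f"estimates: {[round(e, 2) for e in estimates]}, messages sent: {sent} (vs {5 * 200} if sent every step)")
```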

17.
With the growing number of mega-services and cloud computing platforms, industrial organizations are utilizing distributed data centers at increasing rates. Rather than the request/reply model, these centers use an event-based communication model. Traditionally, the event-based middleware and the Complex Event Processing (CEP) engine are viewed as two distinct components within a distributed system’s architecture. This division adds system complexity and reduces the ability of consuming applications to fully utilize the CEP toolset. This article addresses these issues by proposing a novel event-based middleware solution. We introduce the Complex Event Routing Infrastructure (CERI), a single event-based infrastructure that serves as an event bus and provides first-class integration of CEP. An unstructured peer-to-peer network is exploited to allow for efficient event transmission. To reduce network flooding, superpeers and overlay network partitioning are introduced. Additionally, CERI provides each client node with the capability of local complex query evaluation. As a result, applications can efficiently offload internal logic to the query evaluation engine. Finally, CERI scales up as more client nodes and event types are added to the system. Because of these favorable scaling properties, CERI serves as a foundational step in bringing event-based middleware and CEP closer together into a single unified infrastructure component.

18.
To address the shortcomings of current RFID (radio frequency identification) complex event processing techniques in performance and in handling distributed applications, a CORBA (Common Object Request Broker Architecture)-based distributed complex event processing model is proposed, together with an efficient distributed complex event processing method based on query planning and cost estimation. Experimental results show that the method is effective for large-scale distributed RFID applications.
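A sketch of what cost-based plan selection for complex event detection can look like: among candidate orderings of the sub-event joins, pick the one with the lowest estimated cost, evaluating cheap and selective sub-events first. The cost model, rates and event names are assumptions for illustration, not the paper's CORBA-based design.

```python
# Cost-based plan selection sketch for distributed complex event detection (assumed cost model).
from itertools import permutations

def plan_cost(order, rates, selectivity):
    """Estimated events flowing through the pipeline when joining sub-events in this order."""
    flow = cost = rates[order[0]]
    for name in order[1:]:
        flow = flow * rates[name] * selectivity     # surviving composite matches so far
        cost += flow
    return cost

rates = {"tag_read": 1000.0, "shelf_removed": 50.0, "exit_gate": 5.0}   # events per second per site
selectivity = 0.001
best = min(permutations(rates), key=lambda order: plan_cost(order, rates, selectivity))
print("cheapest evaluation order:", " -> ".join(best))
```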

19.
Context management systems are expected to administer large volumes of spatial and non-spatial information across geographically dispersed domains. In particular, when these systems cover wide areas such as cities, countries or even the entire planet, the design of scalable storage, retrieval and propagation mechanisms is paramount. This paper elaborates on mechanisms that address advanced requirements, including support for distributed context database management, efficient query handling, innovative management of mobile physical objects, and optimization strategies for distributed context data dissemination. These mechanisms establish a robust, spatially enhanced distributed context management framework that has been designed, carefully implemented and thoroughly evaluated.

20.
In this paper we focus on Complex Event Processing (CEP) applications where the data is generated by sites that are geographically dispersed across large regions. This geographic distribution, combined with the size of the collected data, imposes severe communication and computation challenges. To attack these challenges, we propose a novel approach for geographically distributed CEP, which combines algorithmic and systems contributions. At the algorithmic level, our work combines an in-network processing approach, which pushes parts of the processing (i.e., CEP operators) towards the sources of their input events, with a push–pull paradigm, in order to reduce the number of communicated events. We present optimal (but computationally expensive) solutions which seek to minimize the maximum bandwidth consumption given input latency constraints for detecting events, as well as efficient greedy and heuristic algorithmic variations for our problem. At the systems level, we explain how existing CEP engines can support our algorithms with minimal modifications. Our experimental evaluation, using mainly real datasets and network topologies, demonstrates that the power of our techniques lies in the combination of the in-network with the push–pull paradigm, allowing our algorithms to significantly outperform related centralized push–pull or conventional in-network processing approaches.
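A greedy placement sketch in the spirit of the in-network approach: each operator is assigned to the site that minimizes the events shipped over the network, i.e., it is pulled towards the source producing the bulk of its input. The cost model, greedy rule and example rates are simplifications for illustration, not the paper's optimal or latency-constrained algorithms.

```python
# Greedy in-network operator placement sketch (simplified cost model, illustrative data).
def place_operator(input_rates_per_site):
    """input_rates_per_site: {site: events/sec of this operator's inputs generated there}.
    Shipping cost if placed at a site = sum of the rates that must travel from other sites."""
    total = sum(input_rates_per_site.values())
    costs = {site: total - local for site, local in input_rates_per_site.items()}
    return min(costs, key=costs.get), costs

operators = {
    "SEQ(temp_spike, door_open)": {"athens": 800.0, "berlin": 30.0},
    "AND(door_open, badge_scan)": {"athens": 20.0, "berlin": 950.0},
}
for op, rates in operators.items():
    site, costs = place_operator(rates)
    print(f"{op}: place at {site} (shipped events/sec per choice: {costs})")
```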
