20 similar documents found (search time: 15 ms)
1.
To collect time-sensitive data in sensor networks, mobile devices are introduced to gather the data, and two heuristic algorithms are proposed. The first, based on a solution to the traveling salesman problem, partitions the original problem into smaller subsets and solves the subproblems one by one; it is suited to applications whose data-timeliness requirements are relatively loose. When the timeliness requirements are strict, the proposed greedy algorithm builds a mobile device's route incrementally: starting from the base station (sink), it iteratively selects the node with the smallest cost until no further node can be added to the route. Theoretical analysis and simulation results show that the proposed algorithms reduce the number of mobile devices needed for data collection and greatly shorten the total collection time, so they can be applied in large-scale networks.
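As an illustration of the greedy route construction described above, the following sketch grows one collector's route from the sink until a travel budget is exhausted; the cost function (Euclidean distance), the budget parameter, and all names are assumptions for illustration, not the paper's exact formulation.

```python
import math

def greedy_route(sink, nodes, max_cost):
    """Grow one mobile collector's route from the sink, always appending the
    cheapest reachable node, until no node fits within the travel budget.
    `sink` and `nodes` are (x, y) tuples; `max_cost` models the timeliness bound."""
    route, remaining, spent = [sink], set(nodes), 0.0
    while remaining:
        last = route[-1]
        nxt = min(remaining, key=lambda n: math.dist(last, n))  # lowest-cost node
        step = math.dist(last, nxt)
        if spent + step > max_cost:        # cannot add any more nodes to this route
            break
        route.append(nxt)
        spent += step
        remaining.remove(nxt)
    return route, remaining                # leftover nodes go to the next device
```

Leftover nodes would be handed to additional mobile devices, which is how the number of devices needed falls out of the construction.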
2.
Sensor data fusion imposes a number of novel requirements on query languages and query processing techniques. A spatial/temporal query language called ΣQL has been proposed to support the retrieval and fusion of multimedia information from multiple sources and databases. In this paper we investigate fusion techniques, multimedia data transformations, and ΣQL query processing techniques for sensor data fusion. Fusion techniques have been developed, including fusion by the merge operation, the detection of moving objects, and the incorporation of belief values. An experimental prototype has been implemented and tested to demonstrate the feasibility of these techniques.
3.
Sensor data on traffic events have prompted a wide range of research issues related to so-called ITS (Intelligent Transportation Systems). Data are delivered by both static (fixed) and mobile (embedded) sensors, generating large and complex spatio-temporal series. This scenario presents several research challenges in spatio-temporal data management and data analysis. Management issues involve, for instance, data cleaning and data fusion to support queries at distinct spatial and temporal granularities. Analysis issues include the characterization of traffic behavior for given space and/or time windows, and the detection of anomalous behavior (due either to sensor malfunction or to traffic events). This paper contributes to the solution of some of these issues through a new kind of framework to manage static sensor data. Our work is based on combining research on analytical methods to process sensor data with data management strategies to query these data. The first aspect is geared towards supporting pattern matching and leads to a model to study and predict unusual traffic behavior along an urban road network. The second aspect deals with spatio-temporal database issues, taking into account information produced by the model, which allows distinct granularities and modalities of analysis of sensor data in space and time. This work was conducted within a project that uses real data, with tests conducted on 1,000 sensors over three years in a large French city.
4.
The nature of many sensor applications as well as continuously changing sensor data often imposes real-time requirements on wireless sensor network protocols. Due to numerous design constraints, such as limited bandwidth, memory and energy of sensor platforms, and packet collisions that can potentially lead to an unbounded number of retransmissions, timeliness techniques designed for real-time systems and real-time databases cannot be applied directly to wireless sensor networks. Our objective is to design a protocol for sensor applications that require periodic collection of raw data reports from the entire network in a timely manner. We formulate the problem as a graph coloring problem. We then present TIGRA (Timely Sensor Data Collection using Distributed Graph Coloring), a distributed heuristic for graph coloring that takes into account application semantics and special characteristics of sensor networks. TIGRA ensures that no interference occurs and spatial channel reuse is maximized by assigning a specific time slot for each node. Although the end-to-end delay incurred by sensor data collection largely depends on a specific topology, platform, and application, TIGRA provides a transmission schedule that guarantees a deterministic delay on sensor data collection.
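To make the slot-assignment idea concrete, here is a small centralized sketch in which time slots play the role of colors on an interference (conflict) graph; TIGRA itself computes the assignment distributively and folds in application semantics, so the procedure and names below are only an assumed illustration.

```python
def assign_slots(nodes, interferes):
    """Greedy coloring of a conflict graph: each node gets the lowest time slot
    not already used by a node it interferes with, so no two interfering nodes
    transmit in the same slot while distant nodes may reuse slots.
    `interferes(u, v)` is an assumed predicate over node ids."""
    slot = {}
    for u in sorted(nodes):                       # any deterministic order
        taken = {slot[v] for v in slot if interferes(u, v)}
        s = 0
        while s in taken:                         # first free slot
            s += 1
        slot[u] = s
    return slot                                   # schedule length = max slot + 1
```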
5.
A category of Distributed Real-Time Systems (DRTS) with a multiprocessor pipeline architecture is increasingly used. The key challenge in such systems is to guarantee the end-to-end deadlines of aperiodic tasks. This paper proposes an end-to-end deadline control model, called the Linear Quadratic Stochastic Optimal Control Model (LQ-SOCM), which features distributed feedback control that dynamically enforces the desired performance. The control system treats aperiodic task arrivals and execution-time variation as the two external sources of system unpredictability. LQ-SOCM uses a discrete-time state-space equation to describe the real-time computing system. Then, in the actuator design, a continuous approach is adopted to deal with discrete QoS (Quality of Service) adaptation. Finally, experiments demonstrate that the system is globally stable and can statistically provide end-to-end deadline guarantees for aperiodic tasks, while LQ-SOCM also effectively improves system throughput.
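For orientation, a discrete-time state-space model of the generic form referred to above can be written as follows; the abstract does not give the actual state, input, or disturbance definitions used by LQ-SOCM, so the symbols here are the standard ones and should be read as assumptions.

```latex
x(k+1) = A\,x(k) + B\,u(k) + w(k)
```

Here x(k) would collect the controlled quantities (e.g., measured end-to-end delays), u(k) the control input realized through QoS adaptation, and w(k) a stochastic disturbance capturing aperiodic arrivals and execution-time variation.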
6.
Many distributed database applications need to replicate data to improve data availability and query response time. The two-phase commit protocol guarantees mutual consistency of replicated data but does not provide good performance. Lazy replication has been used as an alternative solution in several types of applications such as on-line financial transactions and telecommunication systems. In this case, mutual consistency is relaxed and the concept of freshness is used to measure the deviation between replica copies. In this paper, we propose two update propagation strategies that improve freshness. Both of them use immediate propagation: updates to a primary copy are propagated towards a slave node as soon as they are detected at the master node, without waiting for the commitment of the update transaction. Our performance study shows that our strategies can improve data freshness by up to five times compared with the deferred approach.
Received April 24, 1998 / Revised June 7, 1999
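A toy sketch of the immediate-propagation mechanism described above: the master forwards each log record to the slave as soon as it is written, and the slave buffers the writes per transaction, applying them as a refresh once the master's commit arrives. The record format, the buffering policy, and all names are assumptions for illustration, not the paper's actual strategies.

```python
from collections import defaultdict

def propagate(log_record, send_to_slave):
    """Master side: ship every log record the moment it is written, without
    waiting for the update transaction to commit (immediate propagation)."""
    send_to_slave(log_record)              # record = (txn_id, operation) or (txn_id, "COMMIT"/"ABORT")

class Refresher:
    """Slave side: buffer incoming writes per transaction and apply them as a
    local refresh transaction once the corresponding COMMIT arrives."""
    def __init__(self, apply_write):
        self.apply_write = apply_write
        self.pending = defaultdict(list)

    def receive(self, record):
        txn, op = record
        if op == "COMMIT":
            for write in self.pending.pop(txn, []):
                self.apply_write(write)    # refresh the replica
        elif op == "ABORT":
            self.pending.pop(txn, None)    # discard the buffered writes
        else:
            self.pending[txn].append(op)   # buffer until the outcome is known
```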
7.
Active databases and real-time databases have been important areas of research in the recent past. It has been recognized that many benefits can be gained by integrating real-time and active database technologies. However, not much work has been done in the area of transaction processing in real-time active databases. This paper deals with an important aspect of transaction processing in real-time active databases, namely the problem of assigning priorities to transactions. In these systems, time-constrained transactions trigger other transactions during their execution. We present three policies for assigning priorities to parent, immediate and deferred transactions executing on a multiprocessor system and then evaluate the policies through simulation. The policies use different amounts of semantic information about transactions to assign the priorities. The simulator has been validated against the results of earlier published studies. We conducted experiments in three settings: a task setting, a main memory database setting and a disk-resident database setting. Our results demonstrate that dynamically changing the priorities of transactions, depending on their behavior (triggering rules), yields a substantial improvement in the number of triggering transactions that meet their deadline in all three settings.
Edited by Henry F. Korth and Amith Sheth.
Received November 1994 / Accepted March 20, 1995
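As a schematic illustration of priority assignment to triggered transactions (the paper's three concrete policies are not reproduced here), the sketch below shows two common choices, deadline-driven versus inherited priority, together with a dynamic adjustment when a parent triggers work; the policy names, the adjustment rule, and the scheduler interface are assumptions.

```python
def edf_priority(txn):
    """Earliest-deadline-first base priority: an earlier deadline means a
    higher priority (represented here as a smaller number)."""
    return txn["deadline"]

def priority_for_triggered(parent, triggered, policy="inherit"):
    """Priority of a transaction triggered by `parent`.
    policy="inherit": run immediate subtransactions at the parent's priority.
    policy="own":     schedule deferred transactions by their own deadline.
    Illustrative policies only, not the paper's exact ones."""
    if policy == "inherit":
        return edf_priority(parent)
    return edf_priority(triggered)

def on_trigger(parent, triggered, scheduler):
    """Dynamic adjustment: once a parent has triggered work that must also
    finish by its deadline, tighten the parent's priority as well."""
    scheduler.set_priority(triggered, priority_for_triggered(parent, triggered))
    scheduler.set_priority(parent, min(edf_priority(parent), edf_priority(triggered)))
```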
8.
Due to resource sharing among tasks, priority inversion can occur during priority-driven preemptive scheduling. In this work, we investigate solutions to the priority inversion problem in a soft real-time database environment where two-phase locking is employed for concurrency control. We examine two basic schemes for addressing the priority inversion problem, one based on priority inheritance and the other based on priority abort. We also study a new scheme, called conditional priority inheritance, which attempts to capitalize on the advantages of each of the two basic schemes. In contrast with previous results obtained in real-time operating systems, our performance studies, conducted on an actual real-time database testbed, indicate that the basic priority inheritance protocol is inappropriate for solving the priority inversion problem in real-time database systems. We identify the reasons for this performance. We also show that the conditional priority inheritance scheme and the priority abort scheme perform well for a wide range of system workloads.
This work was supported by the National Science Foundation under Grant IRI-8908693 and by the U.S. Office of Naval Research under Grant N00014-85-K0398.
A previous version of this paper appeared in Real-Time Systems Symposium, Dec. 1991.
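The following sketch contrasts the two basic schemes and a conditional variant at the point where a lock conflict under two-phase locking causes a priority inversion; the condition shown (whether the lock holder is close to completion) is an assumed stand-in for the paper's actual criterion, and all names are illustrative.

```python
def on_lock_conflict(requester, holder, scheme, near_completion=None):
    """Decide what to do when a higher-priority `requester` blocks on a lock
    held by a lower-priority `holder` (larger number = higher priority here).
    `near_completion(holder)` is a hypothetical predicate used only by the
    conditional scheme."""
    if requester["priority"] <= holder["priority"]:
        return "block"                                  # no inversion: just wait
    if scheme == "inheritance":
        holder["priority"] = requester["priority"]      # boost holder until it releases its locks
        return "block"
    if scheme == "abort":
        return "abort_holder"                           # restart the low-priority holder
    if scheme == "conditional":
        if near_completion is not None and near_completion(holder):
            holder["priority"] = requester["priority"]  # inherit only if the holder is nearly done
            return "block"
        return "abort_holder"
    raise ValueError(f"unknown scheme: {scheme}")
```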
9.
A temporal database contains time-varying data. In a real-time database transactions have deadlines or timing constraints. In this paper we review the substantial research in these two previously separate areas. First we characterize the time domain; then we investigate temporal and real-time data models. We evaluate temporal and real-time query languages along several dimensions. We examine temporal and real-time DBMS implementation. Finally, we summarize major research accomplishments to date and list several unanswered research questions.
11.
It is generally challenging to determine end-to-end delays of applications for maximizing the aggregate system utility subject to timing constraints. Many practical approaches suggest the use of intermediate deadline of tasks in order to control and upper-bound their end-to-end delays. This paper proposes a unified framework for different time-sensitive, global optimization problems, and solves them in a distributed manner using Lagrangian duality. The framework uses global viewpoints to assign intermediate deadlines, taking resource contention among tasks into consideration. For soft real-time tasks, the proposed framework effectively addresses the deadline assignment problem while maximizing the aggregate quality of service. For hard real-time tasks, we show that existing heuristic solutions to the deadline assignment problem can be incorporated into the proposed framework, enriching their mathematical interpretation.
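As a rough indication of the kind of Lagrangian-duality formulation the abstract refers to (the paper's actual objective, constraints, and decomposition are not reproduced here, so this generic form is an assumption): with end-to-end utility U_i for task i, intermediate deadlines d_{i,j} assigned to its stages, and per-resource schedulability constraints g_j(d) <= 0, the constraints can be relaxed into the objective,

```latex
L(d,\lambda) \;=\; \sum_i U_i\Big(\sum_j d_{i,j}\Big) \;-\; \sum_j \lambda_j\, g_j(d),
\qquad \lambda_j \ge 0 ,
```

and the dual problem, minimizing over the multipliers after maximizing over the deadlines, separates by resource, which is what makes a distributed solution possible.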
12.
Data mining is an important real-life application for businesses. It is critical to find efficient ways of mining large data sets. In order to benefit from the experience with relational databases, a set-oriented approach to mining data is needed. In such an approach, the data mining operations are expressed in terms of relational or set-oriented operations. Query optimization technology can then be used for efficient processing. In this paper, we describe set-oriented algorithms for mining association rules. Such algorithms imply performing multiple joins and thus may appear to be inherently less efficient than special-purpose algorithms. We develop new algorithms that can be expressed as SQL queries, and discuss optimization of these algorithms. After analytical evaluation, an algorithm named SETM emerges as the algorithm of choice. Algorithm SETM uses only simple database primitives, viz., sorting and merge-scan join. Algorithm SETM is simple, fast, and stable over the range of parameter values. It is easily parallelized and we suggest several additional optimizations. The set-oriented nature of Algorithm SETM makes it possible to develop extensions easily and its performance makes it feasible to build interactive data mining tools for large databases.
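To make the set-oriented flavor concrete, here is a toy Python rendering of one SETM-style pass; in SETM itself both steps are plain SQL (a merge-scan join over sorted relations followed by a sort/group-by), so the in-memory sets and all names below are only an assumed illustration.

```python
from collections import Counter

def setm_pass(transactions, rk, min_support):
    """One SETM-style pass.
    transactions: {tid: sorted list of items} (the sales table in first normal form).
    rk: list of (tid, itemset) rows, where itemset is a sorted tuple of k items
        that is frequent and contained in transaction tid.
    Returns (rk_next, frequent): the extended rows and the frequent (k+1)-itemsets."""
    # "Join" R_k with the item relation on tid, extending each itemset with a
    # lexicographically larger item from the same transaction.
    candidates = [(tid, itemset + (item,))
                  for tid, itemset in rk
                  for item in transactions[tid]
                  if item > itemset[-1]]
    # "Group by" candidate itemset to count support and keep the frequent ones.
    support = Counter(iset for _, iset in candidates)
    frequent = {iset: c for iset, c in support.items() if c >= min_support}
    rk_next = [(tid, iset) for tid, iset in candidates if iset in frequent]
    return rk_next, frequent
```

Starting from the frequent single items as R_1 and repeating the pass until nothing new is produced yields all frequent itemsets, from which the association rules are then derived.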
13.
In many distributed databases locality of reference is crucial to achieve acceptable performance. However, the purpose of data distribution is to spread the data among several remote sites. One way to solve this contradiction is to use partitioned data techniques. Instead of accessing the entire data, a site works on a fraction that is made locally available, thereby increasing the site's autonomy. We present a theory of partitioned data that formalizes the concept and establishes the basis to develop a correctness criterion and a concurrency control protocol for partitioned databases. Set-serializability is proposed as a correctness criterion and we suggest an implementation that integrates partitioned and non-partitioned data. To complete this study, the policies required in a real implementation are also analyzed.
Recommended by: Hector Garcia-Molina
14.
The growth of the Internet of Things (IoT) and the number of connected devices is driven by emerging applications and business models. One common aim is to provide systems able to synchronize these devices, handle the large amount of data generated daily, and meet business demands. This paper proposes REDA, a cost-effective cloud-based architecture that uses an event-driven backbone to process many applications' data in real time. It supports the Amazon Web Services (AWS) IoT Core and opens the door to a free, software-based implementation. Measured data from several wireless sensor nodes are transmitted to the cloud-running application through the lightweight publish/subscribe messaging transport protocol, MQTT. The real-time stream processing platform Apache Kafka is used as a message broker to receive data from producers and forward it to the corresponding consumers. Microservice design patterns, as event consumers, are implemented with Java Spring and managed with Apache Maven to avoid the problems of monolithic applications. The Apache Kafka cluster, co-located with ZooKeeper, is deployed over three availability zones and optimized for high throughput and low latency. To guarantee no message loss and to assess system performance, different load tests are carried out. The proposed architecture is reliable under stress and can handle up to 8,000 messages per second with low latency on a cheaply hosted and configured setup.
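A minimal sketch of the ingest path described above: sensor readings arriving over MQTT are republished onto a Kafka topic for downstream consumers. It assumes the paho-mqtt and kafka-python client libraries and local broker addresses; the paper's own implementation uses AWS IoT Core and Java Spring microservices, so this is only an illustration of the pattern.

```python
import paho.mqtt.client as mqtt          # pip install paho-mqtt
from kafka import KafkaProducer          # pip install kafka-python

producer = KafkaProducer(bootstrap_servers="localhost:9092")   # Kafka broker (assumed address)

def on_message(client, userdata, msg):
    # Republish each sensor reading on a Kafka topic for the consumer services.
    producer.send("sensor-readings", key=msg.topic.encode(), value=msg.payload)

mqtt_client = mqtt.Client()              # paho-mqtt 1.x style; 2.x also needs a CallbackAPIVersion argument
mqtt_client.on_message = on_message
mqtt_client.connect("localhost", 1883)   # MQTT broker (assumed address)
mqtt_client.subscribe("sensors/#")       # all sensor topics
mqtt_client.loop_forever()               # bridge runs until interrupted
```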
15.
Personalization, advertising, and the sheer volume of online data generate a staggering amount of dynamic Web content. In addition to Web caching, view materialization has been shown to accelerate the generation of dynamic Web content. View materialization is an attractive solution as it decouples the serving of access requests from the handling of updates. In the context of the Web, selecting which views to materialize must be decided online and needs to consider both performance and data freshness, which we refer to as the online view selection problem. In this paper, we define data freshness metrics, provide an adaptive algorithm for the online view selection problem that is based on user-specified data freshness requirements, and present experimental results. Furthermore, we examine alternative metrics for data freshness and extend our proposed algorithm to handle multiple users and alternative definitions of data freshness.
Received: 17 January 2004, Accepted: 23 March 2004, Published online: 19 August 2004
Edited by: S. Abiteboul
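A toy decision rule in the spirit of online view selection under a freshness requirement: serve a materialized view only while its staleness stays within the user-specified bound, otherwise recompute and refresh it. The time-based staleness measure and the names below are assumptions, not the paper's metrics or algorithm.

```python
import time

class OnlineViewSelector:
    """Serve a materialized view while it is fresh enough; recompute and
    re-materialize it otherwise. Illustrative only."""

    def __init__(self, max_staleness_s):
        self.max_staleness = max_staleness_s       # user-specified freshness bound (seconds)
        self.materialized = {}                     # view_id -> (result, refreshed_at)

    def query(self, view_id, recompute):
        entry = self.materialized.get(view_id)
        if entry is not None:
            result, refreshed_at = entry
            if time.time() - refreshed_at <= self.max_staleness:
                return result                      # fresh enough: answer from the view
        result = recompute()                       # stale or absent: hit the base data
        self.materialized[view_id] = (result, time.time())
        return result
```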
16.
Due to the explosive increase of data from both the cyber and physical worlds, the demand for database support in embedded systems is increasing. Databases for embedded systems, or embedded databases, are expected to provide timely in situ data services under various resource constraints, such as limited energy. However, traditional buffer cache management schemes, in which the primary goal is to minimize the number of I/O operations, are problematic since they do not consider the constraints of modern embedded devices, such as limited energy and distinctive underlying storage. In particular, due to the asymmetric read/write characteristics of the flash memory-based storage of modern embedded devices, minimum buffer cache misses coincide with neither minimum power consumption nor minimum I/O deadline misses. In this paper we propose a novel power- and time-aware buffer cache management scheme for embedded databases. A novel multi-dimensional feedback control architecture is proposed, and the characteristics of the underlying storage of modern embedded devices are exploited to simultaneously support the desired I/O power consumption and I/O deadline miss ratio. We have shown through an extensive simulation that our approach satisfies both power and timing requirements in I/O operations under a variety of workloads while consuming significantly less buffer space than baseline approaches.
17.
The Extensible Markup Language, HTML's likely successor for capturing much Web content, is receiving a great deal of attention from the computing and Internet communities. Although the hype raises unrealistic expectations, XML does reduce the obstacles to sharing data among diverse applications and databases by providing a common format for expressing data structure and content. Although some benefits are already within reach, others will require new database technologies and vocabularies for affected application communities.
18.
In a mobile ad-hoc network (MANET), mobile hosts can move freely and communicate with each other directly through a wireless medium without the existence of a fixed wired infrastructure. MANET is typically used in battlefields and disaster recovery situations where it is not feasible to have a fixed network. Techniques that manage database transactions in MANET need to address additional issues such as host mobility, energy limitation and real-time constraints. This paper proposes a solution for transaction management that reduces the number of transactions missing deadlines while balancing the energy consumption by the mobile hosts in the system. This paper then reports the simulation experiments that were conducted to evaluate the performance of the proposed solution in terms of the number of transactions missing deadlines, total energy consumption and the distribution of energy consumption among mobile hosts.
Recommended by: Ahmed Elmagarmid
This work is partially supported by the National Science Foundation grants No. EIA-9973465 and IIS-0312746.
19.
The explosion in complex multimedia content makes it crucial for database systems to support such data efficiently. This paper argues that the "blackbox" ADTs used in current object-relational systems inhibit their performance, thereby limiting their use in emerging applications. Instead, the next generation of object-relational database systems should be based on enhanced abstract data type (E-ADT) technology. An E-ADT can expose the semantics of its methods to the database system, thereby permitting advanced query optimizations. Fundamental architectural changes are required to build a database system with E-ADTs; the added functionality should not compromise the modularity of data types and the extensibility of the type system. The implementation issues have been explored through the development of E-ADTs in Predator. Initial performance results demonstrate an order-of-magnitude performance improvement.
Received January 1, 1998 / Accepted May 27, 1998
20.
To meet the requirements for fast and accurate data acquisition and processing in the real-time monitoring of fiber Bragg grating (FBG) temperature sensors, a design for a real-time data acquisition and processing system based on virtual instrument technology is proposed. The system is designed with the operating principle of the FBG temperature sensor taken into account first, and programs written in the LabVIEW virtual instrument language implement the control and processing of the data acquisition system. Experimental tests on an FBG temperature sensor show that the designed system satisfies the requirements for real-time data acquisition and processing of FBG sensors.