Similar Documents
Found 20 similar documents (search time: 31 ms)
1.
With the rise of moving object databases it is possible to store and process spatial and temporal data, for example geometrical structures together with the information about how these behave over intervals of time. For simple objects like moving points the spatiotemporal development is derived from the start and end position in space and time, which is then linearly interpolated. For moving regions, especially those with changing shapes, it is more challenging to obtain the necessary data to represent them. An elegant and intuitive solution is to create an algorithm which automatically interpolates the moving region from the start and end shape over a specified time interval. Two papers on this topic have been published in the past, each focusing on different aspects of this so-called Region Interpolation Problem. This paper combines the advantages of and improves on these approaches to provide high-quality interpolations while maintaining robustness even in borderline cases. The result is a library that can be easily integrated into existing moving objects database systems, such as the DBMS Secondo developed at the FernUniversität in Hagen.
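
For a moving point, the linear interpolation mentioned above reduces to a single formula: given positions p_0 at time t_0 and p_1 at time t_1,

    p(t) = p_0 + \frac{t - t_0}{t_1 - t_0}\,(p_1 - p_0), \qquad t_0 \le t \le t_1.

The Region Interpolation Problem generalizes this to whole shapes, for which no such closed form exists.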

2.
The importance of reporting is ever increasing in today’s fast-paced market environments, and the availability of up-to-date information for reporting has become indispensable. Current reporting systems are separated from the online transaction processing (OLTP) systems, with periodic updates pushed in. A pre-defined and aggregated subset of the OLTP data, however, does not provide the flexibility, detail, and timeliness needed for today’s operational reporting. As technology advances, this separation has to be re-evaluated, and means to study and evaluate new trends in data storage management have to be provided. This article proposes a benchmark for combined OLTP and operational reporting, providing means to evaluate the performance of enterprise data management systems for mixed workloads of OLTP and operational reporting queries. Such systems offer up-to-date information and the flexibility of the entire data set for reporting. We describe how the benchmark provokes the conflicts that are the reason for separating the two workloads on different systems. In this article, we introduce the concepts, logical data schema, transactions, and queries of the benchmark, which are entirely based on the original data sets and real workloads of existing, globally operating enterprises.

3.
To evaluate the performance of database applications and database management systems (DBMSs), we usually execute workloads of queries on generated databases of different sizes and then benchmark various measures such as response time and throughput. This paper introduces MyBenchmark, a parallel data generation tool that takes a set of queries as input and generates database instances. Users of MyBenchmark can control the characteristics of the generated data as well as the characteristics of the resulting workload. Applications of MyBenchmark include DBMS testing, database application testing, and application-driven benchmarking. In this paper, we present the architecture and the implementation algorithms of MyBenchmark. Experimental results show that MyBenchmark is able to generate workload-aware databases for a variety of workloads, including query workloads extracted from the TPC-C, TPC-E, TPC-H, and TPC-W benchmarks.

4.
An ontology database is a basic relational database management system that models an ontology plus its instances. To reason over the transitive closure of instances in the subsumption hierarchy, for example, an ontology database can either unfold views at query time or propagate assertions using triggers at load time. In this paper, we use existing benchmarks to evaluate our method—using triggers—and we demonstrate that by forward computing inferences, we not only improve query time, but the improvement appears to cost only more space (not time). However, we go on to show that the true penalties were simply opaque to the benchmark, i.e., the benchmark inadequately captures load-time costs. We have applied our methods to two case studies in biomedicine, using ontologies and data from genetics and neuroscience to illustrate two important applications: first, ontology databases answer ontology-based queries effectively; second, using triggers, ontology databases detect instance-based inconsistencies—something not possible using views. Finally, we demonstrate how to extend our methods to perform data integration across multiple, distributed ontology databases.
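
To make the trigger-based, forward-computed approach concrete, here is a minimal sketch in Python with SQLite. The schema (is_a, closure, instance_of) is hypothetical rather than the paper's; it only illustrates propagating an instance assertion to all superclasses at load time, so that query time needs no view unfolding.

    import sqlite3

    db = sqlite3.connect(":memory:")
    db.executescript("""
    CREATE TABLE is_a(sub TEXT, super TEXT);        -- asserted class hierarchy
    CREATE TABLE closure(sub TEXT, super TEXT);     -- its transitive closure
    CREATE TABLE instance_of(ind TEXT, cls TEXT);   -- asserted + inferred facts

    -- Load-time trigger: propagate each new instance assertion upward
    -- through every superclass recorded in the closure table.
    -- (SQLite's recursive_triggers default keeps the inner inserts
    -- from re-firing this trigger.)
    CREATE TRIGGER propagate AFTER INSERT ON instance_of
    BEGIN
      INSERT INTO instance_of
      SELECT NEW.ind, super FROM closure WHERE sub = NEW.cls
        AND NOT EXISTS (SELECT 1 FROM instance_of
                        WHERE ind = NEW.ind AND cls = closure.super);
    END;
    """)

    db.executemany("INSERT INTO is_a VALUES (?, ?)",
                   [("dog", "mammal"), ("mammal", "animal")])
    # Materialize the hierarchy's transitive closure once, before loading data.
    db.execute("""
        WITH RECURSIVE tc(sub, super) AS (
          SELECT sub, super FROM is_a
          UNION
          SELECT tc.sub, i.super FROM tc JOIN is_a i ON tc.super = i.sub)
        INSERT INTO closure SELECT * FROM tc""")

    db.execute("INSERT INTO instance_of VALUES ('rex', 'dog')")
    # Query time: the inferences are already stored, no view unfolding needed.
    print(db.execute("SELECT cls FROM instance_of WHERE ind = 'rex'").fetchall())
    # -> [('dog',), ('mammal',), ('animal',)]

The trade-off described above is visible here: the trigger spends load-time work and extra rows in instance_of so that the subsumption query becomes a plain lookup.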

5.
Traditional approaches to the measurement of performance for CAD algorithms involve the use of sets of so-called “benchmark circuits.” In this paper, we demonstrate that current procedures do not produce results which accurately characterize the behavior of the algorithms under study. Indeed, we show that the apparent advances in algorithms which are documented by traditional benchmarking may well be due to chance, and not due to any new properties of the algorithms. As an alternative, we introduce a new methodology for the characterization of CAD heuristics which employs well-studied design of experiments methods. We show through numerous examples how such methods can be applied to evaluate the behavior of heuristics used in BDD variable ordering.

6.
An important issue in text mining is how to make use of multiple pieces of discovered knowledge to improve future decisions. In this paper, we propose a new approach to combining multiple sets of rules for text categorization using Dempster’s rule of combination. We develop a boosting-like technique for generating multiple sets of rules based on rough set theory and model classification decisions from multiple sets of rules as pieces of evidence, which can be combined by Dempster’s rule of combination. We apply these methods to 10 of the 20 Newsgroups—a benchmark data collection (Baker and McCallum 1998)—individually and in combination. Our experimental results show that the performance of the best combination of the multiple sets of rules on the 10 groups of the benchmark data is statistically significantly better than that of the best single set of rules. The comparative analysis between the Dempster–Shafer and the majority voting (MV) methods, along with an overfitting study, confirms the advantage and the robustness of our approach.
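
For reference, Dempster's rule combines two bodies of evidence with mass functions m_1 and m_2 into

    (m_1 \oplus m_2)(A) = \frac{1}{1 - K} \sum_{B \cap C = A} m_1(B)\, m_2(C), \qquad
    K = \sum_{B \cap C = \emptyset} m_1(B)\, m_2(C),

for nonempty A, where K measures the conflict between the two sources; here each set of rules contributes one such mass function per classification decision.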

7.
8.
This article deals with query processing techniques for the SQLf language, which is an extended version of SQL supporting imprecise queries interpreted in the framework of fuzzy sets. SQLf, like SQL, allows for the use of nested queries, in which a (fuzzy) condition involved in a select block calls on another select block (the nested one). Two types of processing strategies for nested queries are discussed. The first one tends to take advantage of existing database management systems (DBMS) to process fuzzy queries thanks to an additional layer which is in charge of translating the initial query into a Boolean one. In this perspective, the performance obtained depends strongly on the efficiency of the underlying DBMS. The other strategy is slightly different, and it is situated in the context of the design of systems involving specific algorithms for processing fuzzy queries. In this article, the focus is put on algorithms related to the generic nesting construct “exists.”
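
As a minimal sketch of the semantics involved (assuming the common fuzzy-set reading, with max for the existential quantifier and min for conjunction, rather than quoting SQLf's exact evaluation rules), the degree to which an outer tuple satisfies a condition containing a nested "exists" block can be computed as follows:

    # Hypothetical degrees; the fuzzy reading assumed here grades EXISTS by
    # the best-matching tuple of the nested block (max) and combines it with
    # the outer fuzzy predicate by min.
    def fuzzy_exists(inner_degrees):
        return max(inner_degrees, default=0.0)

    def fuzzy_and(a, b):
        return min(a, b)

    outer_degree = 0.8                 # e.g. "price is cheap" for the outer tuple
    nested_degrees = [0.3, 0.9, 0.5]   # degrees of the nested block's tuples
    print(fuzzy_and(outer_degree, fuzzy_exists(nested_degrees)))  # -> 0.8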

9.
Finding the nearest k objects to a query object is a fundamental operation for many data mining algorithms. With the recent interest in privacy, it is not surprising that there is strong interest in k-NN queries to enable clustering, classification, and outlier-detection tasks. However, previous approaches to privacy-preserving k-NN have been costly and can only be realistically applied to small data sets. In this paper, we provide efficient solutions for k-NN queries for vertically partitioned data. We provide the first solution for the L∞ (or Chessboard) metric as well as detailed privacy-preserving computation of all other Minkowski metrics. We enable privacy-preserving L∞ by providing a practical approach to Yao’s millionaires problem with more than two parties, based on a pragmatic and implementable solution to Yao’s millionaires problem with shares. We also provide privacy-preserving algorithms for combinations of local metrics into a global metric that handles the large dimensionality and diversity of attributes common in vertically partitioned data. To manage very large data sets, we provide a privacy-preserving SASH (a very successful data structure for associative queries in high dimensions). Besides providing a theoretical analysis, we illustrate the efficiency of our approach with an empirical evaluation.
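
For reference, the metrics in question are the Minkowski distance of order p and its limit as p goes to infinity, the Chessboard (Chebyshev) metric:

    d_p(x, y) = \left( \sum_{i=1}^{n} |x_i - y_i|^p \right)^{1/p}, \qquad
    d_\infty(x, y) = \max_{1 \le i \le n} |x_i - y_i|.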

10.
Modern database applications are increasingly employing database management systems (DBMS) to store multimedia and other complex data. To adequately support the queries required to retrieve these kinds of data, the DBMS needs to answer similarity queries. However, the standard structured query language (SQL) does not provide effective support for such queries. This paper proposes an extension to SQL that seamlessly integrates syntactical constructions to express similarity predicates into the existing SQL syntax, and describes the implementation of a similarity retrieval engine that allows posing similarity queries using the language extension in a relational DBMS. The engine allows the evaluation of every aspect of the proposed extension, including the data definition language and data manipulation language statements, and employs metric access methods to accelerate the queries.

11.
Integrating K-means clustering with a relational DBMS using SQL
Integrating data mining algorithms with a relational DBMS is an important problem for database programmers. We introduce three SQL implementations of the popular K-means clustering algorithm to integrate it with a relational DBMS: 1) a straightforward translation of K-means computations into SQL, 2) an optimized version based on improved data organization, efficient indexing, sufficient statistics, and rewritten queries, and 3) an incremental version that uses the optimized version as a building block with fast convergence and automated reseeding. We experimentally show the proposed K-means implementations work correctly and can cluster large data sets. We identify which K-means computations are more critical for performance. The optimized and incremental K-means implementations exhibit linear scalability. We compare K-means implementations in SQL and C++ with respect to speed and scalability and we also study the time to export data sets outside of the DBMS. Experiments show that SQL overhead is significant for small data sets, but relatively low for large data sets, whereas export times become a bottleneck for C++.
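
A minimal sketch of the first variant, the straightforward translation of one K-means iteration into SQL, run from Python on SQLite with a hypothetical one-dimensional schema (the paper's actual schema and optimizations are not reproduced here):

    import sqlite3

    db = sqlite3.connect(":memory:")
    db.executescript("""
    CREATE TABLE points(id INTEGER PRIMARY KEY, x REAL);
    CREATE TABLE centroids(k INTEGER PRIMARY KEY, x REAL);
    CREATE TABLE assign(id INTEGER PRIMARY KEY, k INTEGER);
    """)
    db.executemany("INSERT INTO points(x) VALUES (?)",
                   [(v,) for v in (1.0, 1.2, 0.8, 9.0, 9.5, 8.7)])
    db.executemany("INSERT INTO centroids VALUES (?, ?)", [(0, 0.0), (1, 10.0)])

    for _ in range(5):        # fixed iteration count keeps the sketch short
        db.executescript("""
        -- Step 1: assign each point to its nearest centroid (squared distance).
        DELETE FROM assign;
        INSERT INTO assign
        SELECT p.id, (SELECT c.k FROM centroids c
                      ORDER BY (p.x - c.x) * (p.x - c.x) LIMIT 1)
        FROM points p;
        -- Step 2: recompute each centroid as the mean of its assigned points.
        UPDATE centroids SET x = (SELECT AVG(p.x)
                                  FROM points p JOIN assign a ON p.id = a.id
                                  WHERE a.k = centroids.k);
        """)

    print(db.execute("SELECT k, x FROM centroids ORDER BY k").fetchall())
    # -> approximately [(0, 1.0), (1, 9.07)]

The optimized and incremental versions described above replace the correlated nearest-centroid subquery with better data organization, sufficient statistics, and rewritten queries.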

12.
The spatio-temporal database research community has just started to investigate benchmarking issues. On the one hand, we would rather have a benchmark that is representative of real-world applications, in order to verify the expressiveness of proposed models. On the other hand, we would like a benchmark that offers a sizeable workload of data and query sets, which could stress the strengths and weaknesses of a broad range of data access methods. This paper offers a framework for a spatio-temporal data set generator, a first step towards a full benchmark for the large real-world application field of smoothly moving objects with few or no restrictions in motion. The driving application is the modeling of fishing ships, where the ships go in the direction of the most attractive shoals of fish while trying to avoid storm areas. Shoals are themselves attracted by plankton areas. Ships are moving points; plankton or storm areas are regions with a fixed center but moving shape; and shoals are moving regions. The specification is written in such a way that users can easily adjust the generation model parameters.

13.
In the InfoBeacons system, a peer-to-peer network of beacons cooperates to route queries to the best information sources. Many internet sources are unwilling to provide more cooperation than simple searching to aid in the query routing. We adapt techniques from information retrieval to deal with this lack of cooperation. In particular, beacons determine how to route queries based on information cached from sources’ responses to queries. In this paper, we examine alternative architectures for routing queries between beacons and to data sources. We also examine how to improve the routing by probing sources in an informed way to learn about their content. Results of experiments using a beacon network to search 2,500 information sources demonstrate the effectiveness of our system; for example, our techniques require contacting up to 71 percent fewer sources than existing peer-to-peer random walk techniques.

14.
We present a model problem for benchmarking codes that investigate magma migration in the Earth’s interior. This system retains the essential features of more sophisticated models, yet has the advantage of possessing solitary wave solutions. The existence of such exact solutions to the nonlinear problem makes it an excellent benchmark problem for combinations of solver algorithms. In this work, we explore a novel algorithm for computing high-quality approximations of the solitary waves in one, two, and three dimensions and use them to benchmark a semi-Lagrangian Crank–Nicolson scheme for a finite element discretization of the time-dependent problem.

15.
Benchmarking quality measurement
This paper gives a simple benchmarking procedure for companies wishing to develop measures for software quality attributes of software artefacts. The procedure does not require that a proposed measure be a consistent measure of a quality attribute. It requires only that the measure shows agreement most of the time. The procedure provides summary statistics for measures of quality attributes of a software artefact. These statistics can be used to benchmark subjective direct measurement of a quality attribute by a company’s software developers. Each proposed measure is expressed as a set of error rates for measurement on an ordinal scale, and these error rates enable simple benchmarking statistics to be derived. The statistics can also be derived for any proposed objective indirect measure or prediction system for the quality attribute. For an objective measure or prediction system to be of value to the company, it must be ‘better’ or ‘more objective’ than the organisation’s current measurement or prediction capability, and thus confidence that the benchmark’s objectivity has been surpassed must be demonstrated. By using Bayesian statistical inference, the paper shows how to decide whether a new measure should be considered ‘more objective’ or whether a prediction system’s predictive capability can be considered ‘better’ than the current benchmark. Furthermore, the Bayesian inferential approach is easy to use and provides clear advantages for quantifying and inferring differences in objectivity.

16.
Indexing moving objects (MO) has been a hot topic in the field of moving object databases for many years. An impressive number of access methods have been proposed to optimize the processing of MO-related queries. Several methods have focused on spatio-temporal range queries, which represent the foundation of MO trajectory queries. Surprisingly, only a few of them consider that the objects’ movements are constrained. This is an important aspect for several reasons, ranging from better capturing the relationship between the trajectory and the network space to more accurate trajectory representation with lower storage requirements. In this paper, we propose T-PARINET, an access method to efficiently retrieve the trajectories of objects moving in networks. T-PARINET is designed for continuous indexing of trajectory data flows. The cornerstone of T-PARINET is PARINET, an efficient index for historical trajectory data. The structure of PARINET is based on a combination of graph partitioning and a set of composite B+-tree local indexes. Because the network can be modeled using graphs, the partitioning of the trajectory data makes use of graph partitioning theory and can be tuned for a given query load and a given data distribution in the network space. The tuning process is built on a good-quality cost model that is supplied with PARINET. The advantage of having a cost model is twofold: it allows a better integration of the index into the query optimizer of any DBMS, and it permits tuning the index structure for better performance. The tuning process can be performed before index creation in the case of historical data, or online in the case of indexing data flows. In fact, massive online updates can degrade the index quality, which can be measured by the cost model. We propose a specific maintenance process that results in T-PARINET. We study different types of queries and provide an optimized configuration for several scenarios. T-PARINET can easily be integrated into any RDBMS, which is an essential asset, particularly for industrial or commercial applications. The experimental evaluation under an off-the-shelf DBMS shows that our method is robust. It also significantly outperforms the reference R-tree-based access methods for in-network trajectory databases.

17.
Graph conductance queries, also known as personalized PageRank and related to random walks with restarts, were originally proposed to assign a hyperlink-based prestige score to Web pages. More general forms of such queries are also very useful for ranking in entity-relation (ER) graphs used to represent relational, XML and hypertext data. Evaluation of PageRank usually involves a global eigenvector computation. If the graph is even moderately large, interactive response times may not be possible. Recently, the need for interactive PageRank evaluation has increased. The graph may be fully known only when the query is submitted. Browsing actions of the user may change some inputs to the PageRank computation dynamically. In this paper, we describe a system that analyzes query workloads and the ER graph, invests in limited offline indexing, and exploits those indices to achieve essentially constant-time query processing, even as the graph size scales. Our techniques—data and query statistics collection, index selection and materialization, and query-time index exploitation—have parallels in the extensive relational query optimization literature, but are applied to supporting novel graph data repositories. We report on experiments with five temporal snapshots of the CiteSeer ER graph having 74–702 thousand entity nodes, 0.17–1.16 million word nodes, 0.29–3.26 million edges between entities, and 3.29–32.8 million edges between words and entities. We also used two million actual queries from CiteSeer’s logs. Queries run 3–4 orders of magnitude faster than whole-graph PageRank, the gap growing with graph size. Index size is smaller than a text index. Ranking accuracy is 94–98% with reference to whole-graph PageRank.
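
As background for what is being accelerated, here is a minimal sketch of personalized PageRank by power iteration with restart; the graph, restart vector, and parameter values are hypothetical, and the paper's contribution is precisely the indexing that avoids running this at query time:

    # score = alpha * restart + (1 - alpha) * (mass pushed along out-edges)
    def personalized_pagerank(out_links, restart, alpha=0.15, iters=100):
        score = dict(restart)                  # start from the restart vector
        for _ in range(iters):
            nxt = {v: alpha * restart.get(v, 0.0) for v in out_links}
            for u, succs in out_links.items():
                if succs:                      # spread u's mass to successors
                    share = (1 - alpha) * score.get(u, 0.0) / len(succs)
                    for v in succs:
                        nxt[v] = nxt.get(v, 0.0) + share
            score = nxt
        return score

    graph = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
    print(personalized_pagerank(graph, restart={"a": 1.0}))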

18.
Compressed Data Cube for Approximate OLAP Query Processing
Approximate query processing has emerged as an approach to dealing with the huge data volumes and complex queries in data warehouse environments. In this paper, we present a novel method that provides approximate answers to OLAP queries. Our method is based on building a compressed (approximate) data cube using a clustering technique and using this compressed data cube to answer queries directly, which improves query performance. We also present the query-answering algorithm and the confidence intervals of the query results. An extensive experimental study with the OLAP Council benchmark shows the effectiveness and scalability of our cluster-based approach compared to sampling.

19.
RFID middleware collects and filters RFID streaming data to process applications' requests called continuous queries, because they are executed continuously during tag movement. Several approaches to building an index on queries rather than data records, called a query index, have been proposed to evaluate continuous queries over streaming data. EPCglobal proposed the Event Cycle Specification (ECSpec) model, which is a de facto standard query interface for RFID applications. Continuous queries based on ECSpec consist of a large number of segments that represent the query conditions. The problem with using any of the existing query indexes on these continuous queries is that it takes a long time to build the index, because a large number of segments must be inserted into it. To solve this problem, we propose a transform method that converts a group of segments into compressed data. We also propose an efficient query index scheme for the transformed space. Compared with existing query indexes, the proposed index outperforms the others on various datasets.

20.
In recent years, energy-efficient database systems have become a research topic in the database field. CPU dynamic voltage and frequency scaling (DVFS) is an effective dynamic power-saving technique. We explore the latent relationships among the performance, energy consumption, and power of query operations in the PostgreSQL database under different ACPI governors, and investigate how dynamic power management relates to the energy efficiency of database systems. By running the database generated by the TPC-H benchmark together with its 22 corresponding queries, we summarize the impact of the governors on the various operations of database query processing. The experimental results show that dynamic power management of a DBMS via DVFS is effective, that the different operations of query processing have their own characteristics, and that exploiting these characteristics to design more efficient governors is a promising direction.
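
For concreteness, the governor that such an experiment varies is switched through the standard Linux cpufreq sysfs interface; the helper below is a minimal sketch (paths are per-CPU, and writing the governor requires root):

    from pathlib import Path

    CPUFREQ = Path("/sys/devices/system/cpu/cpu0/cpufreq")

    def available_governors():
        # e.g. ["performance", "powersave", "ondemand", ...], driver-dependent
        return (CPUFREQ / "scaling_available_governors").read_text().split()

    def current_governor():
        return (CPUFREQ / "scaling_governor").read_text().strip()

    def set_governor(name):
        assert name in available_governors(), "governor not offered by the driver"
        (CPUFREQ / "scaling_governor").write_text(name)   # requires root

    print(current_governor(), available_governors())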
