On-line analytical processing (OLAP) typically involves complex aggregate queries over large datasets. The data cube has been
proposed as a structure that materializes the results of such queries in order to accelerate OLAP. A significant fraction
of the related work has been on Relational-OLAP (ROLAP) techniques, which are based on relational technology. Existing ROLAP
cubing solutions mainly focus on “flat” datasets, which do not include hierarchies in their dimensions. Nevertheless, as shown
in this paper, the nature of hierarchies introduces several complications into the entire lifecycle of a data cube including
the operations of construction, storage, indexing, query processing, and incremental maintenance. This fact renders existing
techniques essentially inapplicable in a significant number of real-world applications and mandates revisiting the entire
cube lifecycle under this new perspective. To overcome this problem, the CURE algorithm has recently been proposed
as an efficient mechanism to construct complete cubes over large datasets with arbitrary hierarchies and store them in a highly
compressed format, compatible with the relational model. In this paper, we study the remaining phases in the cube lifecycle
and introduce query-processing and incremental-maintenance algorithms for CURE cubes. These are significantly different from
earlier approaches, which have been proposed for flat cubes constructed by other techniques and are inadequate for CURE due
to its high compression rate and the presence of hierarchies. Our methods address issues such as cube indexing, query optimization,
and lazy update policies. Especially regarding updates, such lazy approaches are applied for the first time on cubes. We demonstrate
the effectiveness of CURE in all phases of the cube lifecycle through experiments on both real-world and synthetic datasets.
Notably, the experimental results establish CURE as the first ROLAP technique to complete the construction and use of the cube for the highest-density dataset of the APB-1 benchmark (12 GB). CURE handled this dataset efficiently, indicating the broader promise of the technique.
This paper reports a study of the combined effect of driver age and engine size on the accident severity and at-fault risk of young riders of two-wheelers. Data from the national accident database of Greece are used to calculate accident severity and relative fault-risk rates. Because exposure data are lacking, the induced-exposure technique is applied. A log-linear analysis then examines first- and second-order effects within three-variable groups. Accident-severity modelling revealed a significant second-order interaction between severity, driver age, and two-wheeler engine size. In contrast, no second-order effects were identified in fault-risk modelling. A significant effect of driver age on accident fault risk was identified, while the effect of engine size was not significant.
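The induced-exposure idea, using not-at-fault involvements as a proxy for exposure, can be sketched roughly as follows. The records, group labels, and `relative_fault_risk` helper are hypothetical, and real analyses fit log-linear models rather than computing simple ratios:

```python
from collections import Counter

# Hypothetical accident records: (age_group, engine_class, at_fault).
records = [
    ("18-24", "<125cc",  True),
    ("18-24", "<125cc",  True),
    ("18-24", "<125cc",  False),
    ("25-34", ">=125cc", True),
    ("25-34", ">=125cc", False),
    ("25-34", ">=125cc", False),
]

def relative_fault_risk(recs):
    """Induced-exposure estimate: treat not-at-fault drivers as a sample of
    exposure, so risk = (#at-fault) / (#not-at-fault) per group.
    Groups with no not-at-fault involvements are skipped."""
    fault, clean = Counter(), Counter()
    for age, cc, at_fault in recs:
        (fault if at_fault else clean)[(age, cc)] += 1
    return {g: fault[g] / clean[g] for g in clean}

print(relative_fault_risk(records))
```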
XML database systems have emerged as a result of the acceptance of the XML data model. Recent works have followed the promising approach of building XML database management systems on underlying RDBMSs. Achieving query-processing performance reduces to two questions: (i) How should the XML data be decomposed into data that are stored in the RDBMS? (ii) How should the XML query be translated into an efficient plan that sends one or more SQL queries to the underlying RDBMS and combines the data into the XML result? We provide a formal framework for XML Schema-driven decompositions, which encompasses the decompositions proposed in prior work and extends them with decompositions that employ denormalized tables and binary-coded XML fragments. We provide corresponding query-processing algorithms that translate the XML query conditions into conditions on the relational tables and assemble the decomposed data into the XML query result. Our key performance focus is the response time for delivering the first results of a query. The most effective of the described decompositions have been implemented in XCacheDB, an XML DBMS built on top of a commercial RDBMS, which serves as our experimental basis. We present experiments and analysis that point to a class of decompositions, called inlined decompositions, that improve query performance for both full results and first results without a significant increase in the size of the database.
Received: 21 December 2001; Accepted: 1 July 2003; Published online: 23 June 2004. Edited by A. Halevy. Andrey Balmin has been supported by NSF IRI-9734548. The authors built the XCacheDB system while on leave at Enosys Software, Inc., during 2000.
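As a rough, hypothetical illustration of the generic "edge table" style of XML-to-relational decomposition (not XCacheDB's schema-driven or inlined decompositions), one might shred a document into a single node table and translate a path query into a self-join:

```python
import sqlite3
import xml.etree.ElementTree as ET

XML = ("<book><title>Data on the Web</title>"
       "<author>Abiteboul</author><author>Buneman</author></book>")

def shred(xml_text, cur):
    """Generic decomposition: every XML node becomes one row
    (id, parent, tag, text). Schema-driven decompositions would instead
    specialize tables per element type."""
    cur.execute("CREATE TABLE edge (id INTEGER, parent INTEGER, tag TEXT, text TEXT)")
    counter = [0]
    def walk(node, parent_id):
        counter[0] += 1
        nid = counter[0]
        cur.execute("INSERT INTO edge VALUES (?,?,?,?)",
                    (nid, parent_id, node.tag, (node.text or "").strip()))
        for child in node:
            walk(child, nid)
    walk(ET.fromstring(xml_text), None)

con = sqlite3.connect(":memory:")
cur = con.cursor()
shred(XML, cur)

# Translate the path query /book/author into a self-join on the edge table.
rows = cur.execute("""
    SELECT c.text FROM edge p JOIN edge c ON c.parent = p.id
    WHERE p.tag = 'book' AND c.tag = 'author'""").fetchall()
print([r[0] for r in rows])   # ['Abiteboul', 'Buneman']
```

Each additional path step adds another self-join, which is one reason inlined or denormalized decompositions can deliver first results faster.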
Social media play an important role in political mobilization. Voluntary engagement can especially benefit from new opportunities for organizing collective action. Although research has explored the use of Twitter by decentralized individuals for this purpose, there has been little emphasis on its use for community engagement and the provision of public goods. Even less is known about its role in the emergence and offline expansion of spontaneous self-organized solidarity initiatives. This paper investigates how networked communication facilitates self-organization and the development of ties in a network of volunteers in Greece. To examine whether initiative-specific community feelings that can transcend online-offline divides evolve in such hybrid networks, the analysis is complemented with individual-level data drawn from a survey of the initiative's volunteers.
Providing real-time and QoS support to stream processing applications running on top of large-scale overlays is challenging
due to the inherent heterogeneity and resource limitations of the nodes and the multiple QoS demands of the applications that
must concurrently be met. In this paper we propose an integrated adaptive component composition and load balancing mechanism
that (1) allows the composition of distributed stream processing applications on the fly across a large-scale system, while
satisfying their QoS demands and distributing the load fairly on the resources, and (2) adapts dynamically to changes in the
resource utilization or the QoS requirements of the applications. Our extensive experimental results, using both simulations
and a prototype deployment, illustrate the efficiency, performance, and scalability of our approach.
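The composition step can be caricatured as greedy least-loaded placement under a capacity constraint. The `compose` helper, the component costs, and the capacity-as-QoS check below are illustrative assumptions, not the paper's actual mechanism, which also handles distribution and dynamic adaptation:

```python
import heapq

def compose(components, nodes):
    """Greedy sketch of QoS-aware composition: place each component (with a
    CPU cost) on the currently least-loaded node that has enough spare
    capacity. `components` = list of (name, cost); `nodes` = name -> capacity."""
    load = [(0.0, n) for n in nodes]          # (current load, node)
    heapq.heapify(load)
    placement = {}
    for comp, cost in sorted(components, key=lambda c: -c[1]):
        parked = []
        while load:
            cur, node = heapq.heappop(load)
            if cur + cost <= nodes[node]:      # capacity check = crude QoS test
                placement[comp] = node
                heapq.heappush(load, (cur + cost, node))
                break
            parked.append((cur, node))
        else:
            raise RuntimeError(f"no feasible node for {comp}")
        for item in parked:                    # restore nodes that were skipped
            heapq.heappush(load, item)
    return placement

components = [("filter", 2), ("join", 5), ("agg", 3)]
nodes = {"n1": 6, "n2": 6}
placement = compose(components, nodes)
print(placement)
```

A real mechanism would re-run such placement decisions as load or QoS requirements change; the sketch only covers the initial, static case.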
Corresponding author: Vana Kalogeraki
Thomas Repantis
is a PhD candidate at the Computer Science and Engineering Department of the University of California, Riverside. His research
interests lie in the area of distributed systems, distributed stream processing systems, middleware, peer-to-peer systems,
pervasive and cluster computing. He holds an MSc from the University of California, Riverside and a Diploma from the University
of Patras, Greece, and has interned with IBM Research, Intel Research and Hewlett-Packard.
Yannis Drougas
is currently a Ph.D. student in the Department of Computer Science and Engineering at the University of California, Riverside.
He received his Diploma in Electrical and Computer Engineering from the Technical University of Crete, Greece, in 2003. His research
interests include peer-to-peer systems, real-time systems, stream processing systems, resource management and sensor networks.
Vana Kalogeraki
is currently an Associate Professor in the Department of Computer Science and Engineering at the University of California,
Riverside. She received her Ph.D. in Electrical and Computer Engineering from the University of California, Santa Barbara,
in 2000. Previously she was an Assistant Professor in the Department of Computer Science and Engineering at the University
of California, Riverside (2002–2008) and held a Research Scientist Position at Hewlett Packard Labs in Palo Alto, CA (2001–2002).
Her research interests include distributed systems, peer-to-peer systems, real-time systems, resource management and sensor
networks.
Similarity search in P2P systems has attracted a lot of attention recently, and several important applications, like distributed image search, can profit from the proposed distributed algorithms. In this paper, we address the challenging problem of efficient processing of range queries in metric spaces, where data is horizontally distributed across a super-peer network. Our approach relies on SIMPEER (Doulkeridis et al. in Proceedings of VLDB, pp. 986–997, 2007), a framework that dynamically clusters peer data in order to build distributed routing information at the super-peer level. SIMPEER allows the evaluation of exact range and nearest-neighbor queries in a distributed manner that reduces communication cost, network latency, bandwidth consumption, and the computational overhead at each individual peer. In this paper, we extend SIMPEER by focusing on efficient range-query processing and providing recall-based guarantees for the quality of the result retrieved so far. This is especially useful for range queries that lead to result sets of high cardinality and incur high processing costs, while the complete result set becomes overwhelming for the user. Our framework employs statistics to estimate an upper limit on the number of possible results for a range query, so that a super-peer may decide not to propagate the query further and thus reduce the scope of the search. We provide an experimental evaluation of our framework and show that our approach performs efficiently, even with a high degree of distribution.
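The cluster-based routing that such a framework builds enables pruning via the triangle inequality: a range query need not visit a cluster whose center is farther from the query than the query range plus the cluster radius. A minimal sketch, in which the Euclidean distance and the `clusters_to_visit` helper are assumptions for illustration (SIMPEER works over general metric spaces):

```python
import math

# Each super-peer summarizes its peers' data as clusters: (center, radius).
clusters = [((0.0, 0.0), 1.0), ((10.0, 0.0), 2.0), ((3.0, 4.0), 0.5)]

def clusters_to_visit(clusters, q, r):
    """Return indices of clusters that may intersect the query ball (q, r).
    By the triangle inequality, if dist(q, center) - radius > r, no point of
    the cluster can lie within distance r of q, so the cluster is pruned."""
    return [i for i, (center, radius) in enumerate(clusters)
            if math.dist(q, center) - radius <= r]

print(clusters_to_visit(clusters, q=(0.0, 0.0), r=2.0))   # [0]
```

The same test, applied at each super-peer over its routing clusters, is what lets a distributed range query skip entire branches of the overlay.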
Since the beginning of the Semantic Web initiative, significant efforts have been invested in finding efficient ways to publish, store, and query metadata on the Web. RDF and SPARQL have become the standard data model and query language, respectively, to describe resources on the Web. Large amounts of RDF data are now available either as stand-alone datasets or as metadata over semi-structured (typically XML) documents. The ability to apply RDF annotations over XML data emphasizes the need to represent and query data and metadata simultaneously. We propose XR, a novel hybrid data model capturing the structural aspects of XML data and the semantics of RDF, also enabling us to reason about XML data. Our model is general enough to describe pure XML or RDF datasets, as well as RDF-annotated XML data, where any XML node can act as a resource. This data model comes with the XRQ query language, which combines features of both XQuery and SPARQL. To demonstrate the feasibility of this hybrid XML-RDF data management setting, and to validate its usefulness, we have developed an XR platform on top of well-known data management systems for XML and RDF. In particular, the platform features several XRQ query processing algorithms, whose performance is experimentally compared.
Deep learning has catalysed progress in tasks such as face recognition and analysis, leading to a quick integration of technological solutions in multiple layers of our society. While such systems have proven to be accurate by standard evaluation metrics and benchmarks, a surge of work has recently exposed the demographic bias that such algorithms exhibit, highlighting that accuracy does not entail fairness. Clearly, deploying biased systems under real-world settings can have grave consequences for affected populations. Indeed, learning methods are prone to inheriting, or even amplifying, the bias present in a training set, manifested as uneven representation across demographic groups. In facial datasets, this particularly relates to attributes such as skin tone, gender, and age. In this work, we address the problem of mitigating bias in facial datasets by data augmentation. We propose a multi-attribute framework that can successfully transfer complex, multi-scale facial patterns even if these belong to underrepresented groups in the training set. This is achieved by relaxing the rigid dependence on a single attribute label, and further introducing a tensor-based mixing structure that captures multiplicative interactions between attributes in a multilinear fashion. We evaluate our method with an extensive set of qualitative and quantitative experiments on several datasets, with rigorous comparisons to state-of-the-art methods. We find that the proposed framework can successfully mitigate dataset bias, as evinced by extensive evaluations on established diversity metrics, while significantly improving fairness metrics such as equality of opportunity.
In data models that have graph representations, users navigate by following the links of the graph structure. Mining collected information about user accesses in such models involves determining frequently occurring access sequences. In this paper, the problem of finding traversal patterns in such collections is examined. The determination of patterns is based on the graph structure of the model. For this purpose, three algorithms are presented: one that is level-wise with respect to the lengths of the patterns and two that are not. Additionally, we consider the fact that accesses within patterns may be interleaved with random accesses made for navigational purposes; the definition of the pattern type generalizes existing ones to take this into account. The performance of all algorithms and their sensitivity to several parameters are examined experimentally.
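A level-wise (Apriori-style) miner over graph-constrained access sequences, with pattern accesses allowed to interleave with navigational ones, might look roughly like this. The toy graph, the sessions, and the helper names are hypothetical, and the paper's non-level-wise algorithms are not sketched:

```python
from collections import Counter

# Toy site graph (adjacency) and recorded user access sequences.
graph = {"A": {"B", "C"}, "B": {"C"}, "C": {"A"}}
sessions = [["A", "B", "C"], ["A", "C"], ["A", "B", "C", "A"]]

def contains(seq, pattern):
    """A pattern occurs if its accesses appear in order, possibly interleaved
    with other (navigational) accesses -- the generalized pattern type."""
    it = iter(seq)
    return all(p in it for p in pattern)

def levelwise_patterns(sessions, graph, min_support):
    """Level-wise mining: length-k candidates extend frequent (k-1)-patterns
    only along graph edges; support = number of sessions containing a pattern."""
    frequent, level = [], [(v,) for v in graph]
    while level:
        counts = Counter()
        for pat in level:
            counts[pat] = sum(contains(s, pat) for s in sessions)
        keep = [p for p in level if counts[p] >= min_support]
        frequent += keep
        level = [p + (n,) for p in keep for n in graph[p[-1]]]
    return frequent

print(levelwise_patterns(sessions, graph, min_support=2))
```

Restricting candidate extensions to graph edges is what distinguishes this setting from plain sequence mining: candidates that are impossible to navigate are never generated.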