Subgraph querying has wide applications in fields such as cheminformatics and bioinformatics. Given a query graph q, a subgraph-querying algorithm retrieves from a graph database D all graphs D(q) that contain q as a subgraph. Subgraph querying is costly because it relies on subgraph isomorphism tests, which are NP-complete. Graph indices are commonly used to improve the performance of subgraph querying in graph databases: subgraph-querying algorithms first construct a candidate answer set by filtering out false answers and then verify each candidate graph with a subgraph isomorphism test. To build graph indices, various kinds of substructure features (subgraphs, subtrees, or paths) have been proposed with the goal of maximizing the filtering rate. Each works with a specifically designed index structure; for example, discriminative and frequent subgraph features work with gIndex, and δ-TCFG features work with FG-index. We propose Lindex, a graph index that indexes subgraphs contained in database graphs. Nodes in Lindex represent key-value pairs, where the key is a subgraph in the database and the value is the list of database graphs containing the key. We propose two heuristics, used in the construction of Lindex, that allow us to determine answers to subgraph queries while performing fewer subgraph isomorphism tests. Consequently, Lindex improves subgraph-querying efficiency. In addition, Lindex is compatible with any choice of features. Empirically, we demonstrate that Lindex used in conjunction with subgraph indexing features proposed in previous work outperforms other specifically designed index structures. As a novel index structure, Lindex (1) is effective in filtering false graphs, (2) provides fast index lookups, (3) is fast with respect to index construction and maintenance, and (4) can be constructed from any set of substructure index features. These four properties yield a fast and scalable subgraph-querying infrastructure.
We substantiate the benefits of Lindex and its disk-resident variant, Lindex+, both theoretically and empirically.
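The filter-then-verify pipeline the abstract describes can be sketched in miniature. In the toy below, graphs are reduced to sets of labeled edges, single edge labels serve as the index features, and a cheap label-count containment check stands in for the NP-complete subgraph isomorphism test; the graphs, labels, and names are all illustrative, not Lindex's actual structures.

```python
from collections import defaultdict

# Toy database: each graph is a frozenset of labeled edges (u, v, label).
# Real systems store full graph structure; this sketch only illustrates
# the filter-then-verify pipeline, not true subgraph isomorphism.
db = {
    "g1": frozenset({(0, 1, "C-C"), (1, 2, "C-O")}),
    "g2": frozenset({(0, 1, "C-C"), (1, 2, "C-N")}),
    "g3": frozenset({(0, 1, "C-O")}),
}

# Index: feature (edge label) -> posting list of graph ids containing it.
index = defaultdict(set)
for gid, edges in db.items():
    for (_, _, label) in edges:
        index[label].add(gid)

def query(q_edges):
    # Filtering step: intersect the posting lists of the query's features.
    labels = {label for (_, _, label) in q_edges}
    candidates = set(db)
    for label in labels:
        candidates &= index.get(label, set())
    # Verification step: a cheap label-count containment test stands in
    # for the NP-complete subgraph isomorphism check used in practice.
    q_labels = [l for (_, _, l) in q_edges]
    answers = set()
    for gid in candidates:
        db_labels = [l for (_, _, l) in db[gid]]
        if all(db_labels.count(l) >= q_labels.count(l) for l in labels):
            answers.add(gid)
    return answers

print(sorted(query({(0, 1, "C-O")})))  # all graphs containing a C-O edge
```

The filtering step is where an index earns its keep: every graph pruned there is one expensive verification avoided, which is exactly the quantity the abstract's heuristics aim to minimize.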
Authors use images to present a wide variety of important information in documents. For example, two-dimensional (2-D) plots display important data in scientific publications. End-users often seek to extract this data and convert it into a machine-processable form so that it can be analyzed automatically or compared with other existing data. Existing document data extraction tools are semi-automatic and require users to provide metadata and interactively extract the data. In this paper, we describe a system that extracts data from documents fully automatically, completely eliminating the need for human intervention. The system uses a supervised learning-based algorithm to classify figures in digital documents into five classes: photographs, 2-D plots, 3-D plots, diagrams, and others. An integrated algorithm then extracts numerical data from the data points and lines in 2-D plot images, along with the axes and their labels and the data symbols in the figure's legend with their associated labels. We demonstrate through an empirical evaluation that the proposed system and its component algorithms are effective. Our data extraction system has the potential to be a vital component in high-volume digital libraries.
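The numeric extraction step this abstract describes hinges on axis calibration: once tick marks and their recognized labels are located, the pixel coordinates of detected data points map linearly to data coordinates. The sketch below illustrates that mapping with made-up tick positions and values; it is not the paper's actual algorithm.

```python
# Core step in extracting numeric data from a 2-D plot image: once two
# tick marks on an axis are located and their labels read, pixel
# coordinates map linearly to data coordinates. Tick positions and
# values below are illustrative.
def make_axis_mapper(pixel_a, value_a, pixel_b, value_b):
    # Linear interpolation between two calibrated tick marks on one axis.
    scale = (value_b - value_a) / (pixel_b - pixel_a)
    return lambda pixel: value_a + (pixel - pixel_a) * scale

# Suppose recognition found x-ticks: pixel 50 -> 0.0, pixel 450 -> 10.0,
# and y-ticks: pixel 400 -> 0.0, pixel 40 -> 100.0 (pixel y grows downward).
to_x = make_axis_mapper(50, 0.0, 450, 10.0)
to_y = make_axis_mapper(400, 0.0, 40, 100.0)

# A detected data-point marker at pixel (250, 220):
print(to_x(250), to_y(220))  # data coordinates, approximately (5.0, 50.0)
```

Note that the y-axis mapper absorbs the inverted pixel coordinate system (image rows grow downward) automatically, since the scale simply comes out negative.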
Real-Time Systems - This paper presents results and observations from a survey of 120 industry practitioners in the field of real-time embedded systems. The survey provides insights into the...
In this paper, we consider the fundamental problem of frequency estimation for multiple sinusoidal signals with stationary errors. We propose a technique for the frequency estimation problem based on a genetic algorithm and an outlier-insensitive criterion function. In simulation studies and real-life data analysis, we observe that the proposed genetic algorithm-based robust frequency estimators resolve the frequencies of the sinusoidal model with a high degree of accuracy. Among the proposed methods, the genetic algorithm-based least squares estimator provides efficient estimates in the no-outlier scenario, in the sense that their mean square errors attain the corresponding Cramér-Rao lower bounds. In the presence of outliers, the proposed robust methods perform quite well and appear to have a fairly high breakdown point with respect to the level of outlier contamination. Unlike other iterative frequency estimation methods, the proposed methods do not depend significantly on initial guess values.
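The contrast between a least squares and an outlier-insensitive criterion function can be illustrated for a single sinusoid. In the sketch below a dense grid search stands in for the genetic algorithm, and for simplicity the amplitudes are fit by least squares under both criteria; the signal parameters and variable names are illustrative, not the paper's setup.

```python
import numpy as np

# One sinusoid with known frequency, plus noise. A grid search over the
# frequency stands in for the genetic algorithm used in the paper.
rng = np.random.default_rng(0)
n = 200
t = np.arange(n)
true_freq = 0.3  # radians per sample
y = 2.0 * np.cos(true_freq * t) + 1.0 * np.sin(true_freq * t)
y += 0.1 * rng.standard_normal(n)

def residuals(y, t, w):
    # For a fixed frequency w, the amplitudes enter linearly, so fit
    # them by ordinary least squares and return the residual vector.
    X = np.column_stack([np.cos(w * t), np.sin(w * t)])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta

def estimate(y, t, criterion):
    # Minimize the chosen criterion over a dense frequency grid.
    grid = np.linspace(0.01, np.pi - 0.01, 4000)
    scores = [criterion(residuals(y, t, w)) for w in grid]
    return grid[int(np.argmin(scores))]

ls_freq = estimate(y, t, lambda r: np.sum(r ** 2))         # least squares
robust_freq = estimate(y, t, lambda r: np.sum(np.abs(r)))  # outlier-insensitive
print(ls_freq, robust_freq)
```

Replacing the squared-residual criterion with the absolute-residual one is what buys robustness: a single corrupted sample contributes linearly rather than quadratically to the score, so it cannot dominate the frequency choice.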
In early or preparatory design stages, an architect or designer sketches out rough ideas, not only about the object or structure being considered but also about its relation to its spatial context. This is an iterative process in which the sketches are the primary means not only for testing and refining ideas but also for communicating within a design team and with clients. Hence, sketching is the preferred medium for artists and designers during the early stages of design, albeit with a major drawback: sketches are 2D, and effects such as view perturbations or object movement are not supported, thereby inhibiting the design process. We present an interactive system that allows for the creation of a 3D abstraction of a designed space, built primarily by sketching in 2D within the context of an anchoring design or photograph. The system is progressive in the sense that the interpretations are refined as the user continues sketching. As a key technical enabler, we reformulate the sketch interpretation process as a selection optimization over a set of context-generated canvas planes in order to retrieve a regular arrangement of planes. We demonstrate our system (available at http:/geometry.cs.ucl.ac.uk/projects/2016/smartcanvas/ ) with a wide range of sketches and design studies.
Easy-to-use audio/video authoring tools play a crucial role in moving multimedia software from research curiosity to mainstream applications. However, research in multimedia authoring systems has rarely been documented in the literature. This paper describes the design and implementation of an interactive video authoring system called Zodiac, which employs an innovative edit history abstraction to support several unique editing features not found in existing commercial and research video editing systems. Zodiac provides users with a conceptually clean and semantically powerful branching history model of edit operations to organize the authoring process and to navigate among versions of authored documents. In addition, by analyzing the edit history, Zodiac is able to reliably detect a composed video stream's shot and scene boundaries, which facilitates interactive video browsing. Zodiac also features a video object annotation capability that allows users to associate annotations with moving objects in a video sequence. The annotations themselves can be text, image, audio, or video. Zodiac is built on top of MMFS, a file system specifically designed for interactive multimedia development environments, and implements an internal buffer manager that supports transparent lossless compression/decompression. Shot/scene detection, video object annotation, and buffer management all exploit the edit history information for performance optimization.
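The branching history model described in this abstract is essentially a tree of versions rather than a linear undo stack: editing after an undo forks a new branch instead of discarding the abandoned version. A minimal sketch of such a structure follows; the class and method names are illustrative, not Zodiac's actual API.

```python
# Minimal branching edit-history tree. Class and method names are
# illustrative, not Zodiac's actual implementation.
class EditNode:
    def __init__(self, operation, parent=None):
        self.operation = operation      # e.g. "cut 0:10-0:15"
        self.parent = parent
        self.children = []

class EditHistory:
    def __init__(self):
        self.root = EditNode("initial")
        self.current = self.root

    def apply(self, operation):
        # Applying an edit extends the tree; editing after an undo
        # creates a new branch instead of discarding the old version.
        node = EditNode(operation, parent=self.current)
        self.current.children.append(node)
        self.current = node
        return node

    def undo(self):
        if self.current.parent is not None:
            self.current = self.current.parent

    def lineage(self):
        # Operations from the root to the current version, in order.
        # Analyzing this path is what lets a Zodiac-style system derive
        # information such as shot boundaries without rescanning video.
        ops, node = [], self.current
        while node is not None:
            ops.append(node.operation)
            node = node.parent
        return list(reversed(ops))

h = EditHistory()
h.apply("insert clip A")
h.apply("cut 0:10-0:15")
h.undo()
h.apply("overlay title")   # branches off; the "cut" version is preserved
print(h.lineage())  # ['initial', 'insert clip A', 'overlay title']
```

Because every abandoned branch remains reachable from the root, the tree doubles as the version-navigation structure the abstract mentions.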
As biometric authentication systems become more prevalent, it is increasingly important to evaluate their performance. This paper introduces a novel statistical method of performance evaluation for these systems. Given a database of authentication results from an existing system, the method uses a hierarchical random effects model, along with Bayesian inference techniques yielding posterior predictive distributions, to predict performance in terms of error rates using various explanatory variables. By incorporating explanatory variables as well as random effects, the method allows error rates to be predicted when the authentication system is applied to potentially larger and/or different groups of subjects than those originally documented in the database. We also extend the model to allow prediction of the probability of a false alarm on a "watch-list" as a function of the list size. We consider application of our methodology to three different face authentication systems: a filter-based system, a Gaussian mixture model (GMM)-based system, and a system based on a frequency-domain representation of facial asymmetry.
OBJECTIVES: The objectives were to measure the impact of specific features of imaging devices on tasks relevant to minimally invasive surgery (MIS) and to investigate cognitive and perceptual factors in such tasks. BACKGROUND: Although image-guided interventions used in MIS provide benefits for patients, they pose drawbacks for surgeons, including degraded depth perception and reduced field of view (FOV). It is important to identify design factors that affect performance. METHOD: In two navigation experiments, observers fed a borescope through an object until it reached a target. Task completion time and object shape judgments were measured. In a motion perception experiment, observers reported the direction of a line that moved behind an aperture. A motion illusion associated with reduced FOV was measured. RESULTS: Navigation through an object was faster when a preview of the object's exterior was provided. Judgments about the object's shape were more accurate with a preview (compared with none) and with active viewing (compared with passive viewing). The motion illusion decreased with a rectangular or rotating octagonal viewing aperture (compared with circular). CONCLUSIONS: Navigation performance may be enhanced when surgeons develop a mental model of the surgical environment, when surgeons (rather than assistants) control the camera, and when the shape of the image is designed to reduce visual illusions. APPLICATION: Unintentional contact between surgical tools and healthy tissues may be reduced during MIS when (a) visual aids permit surgeons to maintain a mental model of the surgical environment, (b) images are bound by noncircular apertures, and (c) surgeons manually control the camera.
In this paper, we propose an elitist genetic algorithm-based artificial neural network (ANN) model for setting up an early warning system for the occurrence of high inflation. The proposed warning system takes the values of an appropriate set of fundamental economic variables as input and builds an ANN model for quantifying the possibility of high inflation within a fixed time window. An elitism-based generational genetic algorithm is used to optimize the architecture of the ANN model. We empirically evaluate the proposed neuro-genetic approach, identifying the class of leading economic indicators and building an early warning signalling system for the occurrence of high inflation (overall and component inflations) using data from the Indian economy. We further compare the results of the proposed approach with the commonly used data-driven signals approach. In the empirical studies, we observe promising performance of the proposed neuro-genetic warning system, which is capable of generating accurate early warning signals of impending high inflation.
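The elitist generational genetic algorithm this abstract uses to choose an ANN architecture can be sketched in a few lines. Below, the "genome" is simply a candidate hidden-layer size and the fitness function is a toy stand-in for validation accuracy; the parameters and names are illustrative, not the paper's configuration.

```python
import random

# Minimal elitist generational genetic algorithm. The genome is a
# candidate hidden-layer size; the fitness is a toy stand-in for the
# validation accuracy of an ANN with that architecture.
random.seed(7)

def fitness(hidden_units):
    # Toy proxy: accuracy peaks at 16 hidden units and degrades away from it.
    return -abs(hidden_units - 16)

def evolve(pop_size=20, generations=30, elite=2):
    population = [random.randint(1, 64) for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        next_gen = population[:elite]                   # elitism: best survive intact
        while len(next_gen) < pop_size:
            p1, p2 = random.sample(population[:10], 2)  # truncation selection
            child = (p1 + p2) // 2                      # crossover: average sizes
            if random.random() < 0.3:                   # mutation
                child = max(1, child + random.randint(-4, 4))
            next_gen.append(child)
        population = next_gen
    return max(population, key=fitness)

print(evolve())
```

Elitism is the property the abstract's title emphasizes: copying the top individuals unchanged into each new generation guarantees the best fitness found so far is never lost, at a small cost in diversity.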