Similar Documents
Found 20 similar documents (search time: 31 ms)
1.
Distance automata are automata weighted over the semiring \((\mathbb {N}\cup \{\infty \},\min , +)\) (the tropical semiring). Such automata compute functions from words to \(\mathbb {N}\cup \{\infty \}\). It is known from Krob that the problems of deciding '\(f\le g\)' or '\(f=g\)' for f and g computed by distance automata are undecidable. The main contribution of this paper is to show that an approximation of this problem is decidable. We present an algorithm which, given ε>0 and two functions f,g computed by distance automata, answers “yes” if \(f\le (1-\varepsilon )g\), “no” if \(f\not\le g\), and may answer “yes” or “no” in all other cases. The core argument behind this quasi-decision procedure is an algorithm which is able to provide an approximated finite presentation of the closure under products of sets of matrices over the tropical semiring. Lastly, our theorem of affine domination gives better bounds on the precision of known decision procedures for cost automata, when restricted to distance automata.
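To make the semantics concrete, here is a minimal sketch (ours, not the paper's algorithm) of how a distance automaton computes a word's weight via (min,+) matrix products; the one-state example automaton is hypothetical.

```python
INF = float("inf")

def tropical_mul(A, B):
    """(min,+) product of two matrices over the tropical semiring."""
    n, m, p = len(A), len(B), len(B[0])
    return [[min(A[i][k] + B[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]

def weight(word, init, matrices, final):
    """Weight of `word`: the minimum over runs of the summed transition
    weights, obtained by multiplying one matrix per letter."""
    M = init  # 1 x n row vector of initial weights
    for letter in word:
        M = tropical_mul(M, matrices[letter])
    return min(M[0][j] + final[j] for j in range(len(final)))

# One-state automaton charging cost 1 per 'a' and 0 per 'b'.
mats = {"a": [[1]], "b": [[0]]}
print(weight("abab", [[0]], mats, [0]))  # -> 2
```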

2.
The development of formal methods for control design is an important challenge with potential applications in a wide range of safety-critical cyber-physical systems. Focusing on switched dynamical systems, we propose a new abstraction, based on time-varying regions of invariance (control funnels), that models behaviors of systems as timed automata. The main advantage of this method is that it allows for automated verification and reactive controller synthesis without discretizing the evolution of the state of the system. Efficient and analytic constructions are possible in the case of linear dynamics, whereas bounding funnels with conjectured properties based on numerical simulations can be used for general nonlinear dynamics. We demonstrate the potential of our approach with three examples.

3.
The problem of kNN (k Nearest Neighbor) queries has received considerable attention in the database and information retrieval communities. Given a dataset D and a kNN query q, the k nearest neighbor algorithm finds the k data points closest to q. The applications of kNN queries are broad, not only in spatio-temporal databases but also in many other areas, such as multimedia databases, data mining, scientific databases, and video retrieval. Past studies of kNN query processing did not consider the case in which the server receives multiple kNN queries at one time; their algorithms process queries independently. Thus, the server is kept busy re-accessing the database to obtain data that have already been acquired, which wastes I/O and degrades the performance of the whole system. In this paper, we focus on this problem and propose an algorithm named COrrelated kNN query Evaluation (COKE). The main idea of COKE is an “information sharing” strategy whereby the server reuses the query results of previously executed queries for efficiently processing subsequent queries. We conduct a comprehensive set of experiments to analyze the performance of COKE and compare it with the Best-First Search (BFS) algorithm. Empirical studies indicate that COKE outperforms BFS, and achieves lower I/O costs and less running time.
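One way to picture the “information sharing” idea (a toy sketch of ours, not the COKE algorithm): a cached earlier query yields, via the triangle inequality, an upper bound on the new query's kth-nearest distance, which lets the server skip points it would otherwise re-examine.

```python
import heapq, math

class SharedKNN:
    """Toy kNN evaluator that reuses earlier results: if a previous query
    prev_q had kth-NN distance r, then q's kth NN lies within
    dist(q, prev_q) + r, so farther points can be pruned outright."""
    def __init__(self, data):
        self.data = data
        self.cache = []  # list of (query_point, kth_distance)

    def knn(self, q, k):
        bound = math.inf
        for prev_q, prev_kth in self.cache:
            bound = min(bound, math.dist(q, prev_q) + prev_kth)
        heap = []  # max-heap of (-distance, point), size <= k
        for p in self.data:
            d = math.dist(p, q)
            if d > bound:
                continue  # pruned thanks to a previous query
            heapq.heappush(heap, (-d, p))
            if len(heap) > k:
                heapq.heappop(heap)
        result = sorted((-nd, p) for nd, p in heap)
        self.cache.append((q, result[-1][0]))  # remember kth distance
        return [p for _, p in result]

db = SharedKNN([(i, i % 7) for i in range(100)])
print(db.knn((3, 3), 3))
print(db.knn((4, 3), 3))  # benefits from the first query's bound
```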

4.
Flexible integration of multimedia sub-queries with qualitative preferences   (total citations: 1; self-citations: 0; citations by others: 1)
Complex multimedia queries, aiming to retrieve from large databases those objects that best match the query specification, are usually processed by splitting them into a set of m simpler sub-queries, each dealing with only some of the query features. To determine which are the overall best-matching objects, a rule is then needed to integrate the results of such sub-queries, i.e., to globally rank the m-dimensional vectors of matching degrees, or partial scores, that objects obtain on the m sub-queries. State-of-the-art approaches all adopt as integration rule a scoring function, such as weighted average, that aggregates the m partial scores into an overall (numerical) similarity score, so that objects can be linearly ordered and only the highest-scored ones returned to the user. This choice, however, forces the system to compromise between the different sub-queries and can easily lead to missing relevant results. In this paper we explore the potential of a more general approach, based on the use of qualitative preferences, able to define arbitrary partial (rather than only linear) orders on database objects, so that larger flexibility is gained in shaping what the user is looking for. For the purpose of efficient evaluation, we propose two integration algorithms able to work with any (monotone) partial order (thus also with scoring functions): MPO, which delivers objects one layer of the partial order at a time, and iMPO, which can incrementally return one object at a time and is thus also suitable for processing top-k queries. Our analysis demonstrates that using qualitative preferences pays off. In particular, using Skyline and Region-prioritized Skyline preferences for queries on a real image database, we show that the results we get have a precision comparable to that obtainable using scoring functions, yet they are obtained much faster, saving up to about 70% of database accesses.
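As a small illustration of a qualitative preference (a sketch under our own toy setup, not the MPO/iMPO algorithms), the Skyline preference keeps every object whose partial-score vector is not dominated on all m sub-queries:

```python
def dominates(u, v):
    """u dominates v if u is at least as good on every sub-query and
    strictly better on at least one (higher score = better match)."""
    return all(a >= b for a, b in zip(u, v)) and any(a > b for a, b in zip(u, v))

def skyline(objects):
    """Objects whose partial-score vectors no other object dominates:
    a partial order, so no forced compromise between sub-queries."""
    return [u for u in objects
            if not any(dominates(v, u) for v in objects if v is not u)]

scores = [(0.9, 0.2), (0.5, 0.5), (0.4, 0.6), (0.3, 0.3)]
print(skyline(scores))  # (0.3, 0.3) is dominated by (0.5, 0.5) and dropped
```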

5.
Formal synthesis approaches over stochastic systems have received significant attention in the past few years, in view of their ability to provide provably correct controllers for complex logical specifications in an automated fashion. Examples of complex specifications include properties expressed as formulae in linear temporal logic (LTL) or as automata on infinite strings. A general methodology to synthesize controllers for such properties resorts to symbolic models of the given stochastic systems. Symbolic models are finite abstractions of the given concrete systems with the property that a controller designed on the abstraction can be refined (or implemented) into a controller on the original system. Although the recent development of techniques for the construction of symbolic models has been quite encouraging, the general goal of formal synthesis over stochastic control systems is by no means solved. A fundamental issue with the existing techniques is the known “curse of dimensionality,” which is due to the need to discretize state and input sets. Such discretization generally results in an exponential complexity over the number of state and input variables in the concrete system. In this work we propose a novel abstraction technique for incrementally stable stochastic control systems, which does not require state-space discretization but only input set discretization, and that can be potentially more efficient (and thus scalable) than existing approaches. We elucidate the effectiveness of the proposed approach by synthesizing a schedule for the coordination of two traffic lights under some safety and fairness requirements for a road traffic model. Furthermore, we argue that this 5-dimensional linear stochastic control system cannot be studied with existing approaches based on state-space discretization due to the very large number of generated discrete states.

6.
Similarity search in graph databases has been widely investigated. It is worthwhile to develop a fast algorithm to support similarity search in large-scale graph databases. In this paper, we investigate the k-NN (k-Nearest Neighbor) similarity search problem via locality-sensitive hashing (LSH). We propose an innovative fast graph search algorithm named LSH-GSS, which first transforms complex graphs into vectorial representations based on prototypes in the database and then accelerates queries in Euclidean space by employing LSH. Because images can be represented as attributed graphs, we propose an approach to transform attributed graphs into n-dimensional vectors and apply LSH-GSS to execute further image retrieval. Experiments on three real graph datasets and two image datasets show that our methods are highly accurate and efficient.
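For intuition, here is one classic LSH family (random hyperplanes, which hash a vector by the signs of dot products); this is a generic sketch of ours, not necessarily the family LSH-GSS uses on its graph embeddings:

```python
import random

def make_hash(dim, n_bits, seed=42):
    """Random-hyperplane LSH: each bit is the sign of a dot product with a
    random Gaussian vector, so vectors at a small angle collide with high
    probability while dissimilar ones usually land in different buckets."""
    rng = random.Random(seed)
    planes = [[rng.gauss(0, 1) for _ in range(dim)] for _ in range(n_bits)]
    def h(v):
        return tuple(int(sum(p_i * v_i for p_i, v_i in zip(p, v)) >= 0)
                     for p in planes)
    return h

h = make_hash(dim=4, n_bits=8)
a, b, c = [1, 2, 3, 4], [1.1, 2.1, 2.9, 4.2], [-4, 3, -2, 1]
print(h(a) == h(b), h(a) == h(c))  # near-duplicates usually collide; c usually not
```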

7.
Efficient and effective processing of the distance-based join query (DJQ) is of great importance in spatial databases due to the wide range of applications that may address such queries (mapping, urban planning, transportation planning, resource management, etc.). The most representative and well-studied DJQs are the K Closest Pairs Query (KCPQ) and the ε Distance Join Query (εDJQ). These spatial queries involve two spatial data sets and a distance function to measure the degree of closeness, along with a given number of pairs in the final result (K) or a distance threshold (ε). In this paper, we propose four new plane-sweep-based algorithms for KCPQs and their extensions for εDJQs in the context of spatial databases, without the use of an index for either of the two disk-resident data sets (since building and using indexes does not always benefit processing performance). They employ a combination of plane-sweep algorithms and space partitioning techniques to join the data sets. Finally, we present the results of an extensive experimental study that compares the efficiency and effectiveness of the proposed algorithms for KCPQs and εDJQs. This performance study, conducted on medium and big spatial data sets (real and synthetic), validates that the proposed plane-sweep-based algorithms are very promising in terms of both efficiency and effectiveness when neither input is indexed. Moreover, the best of the new algorithms is experimentally compared to the best algorithm based on the R-tree (a widely accepted access method) for KCPQs and εDJQs, using the same data sets. This comparison shows that the new algorithms outperform R-tree-based algorithms in most cases.
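The core plane-sweep idea can be sketched in a few lines (our simplification, not one of the paper's four algorithms): with both sets sorted on x, a candidate pair is skipped as soon as its x-gap alone already exceeds the current Kth-best distance.

```python
import heapq, math

def kcpq(P, Q, K):
    """Plane-sweep sketch of the K Closest Pairs Query. The x-gap is a
    lower bound on the true distance, so it prunes without computing it."""
    P, Q = sorted(P), sorted(Q)
    heap = []  # max-heap via negation, holds the K best pairs seen so far
    for p in P:
        for q in Q:
            worst = -heap[0][0] if len(heap) == K else math.inf
            if abs(q[0] - p[0]) >= worst:
                if q[0] > p[0]:
                    break      # every later q is even farther in x
                continue       # q is far to the left; later q may be close
            d = math.dist(p, q)
            if len(heap) < K:
                heapq.heappush(heap, (-d, p, q))
            elif d < worst:
                heapq.heapreplace(heap, (-d, p, q))
    return sorted((-neg, p, q) for neg, p, q in heap)

P = [(0, 0), (3, 1), (9, 9)]
Q = [(1, 1), (2, 2), (10, 8)]
print(kcpq(P, Q, 2))  # the two pairs at distance sqrt(2)
```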

8.
Representative skyline computation is a fundamental issue in the database area, which has attracted much attention in recent years. A notable definition of representative skyline is the distance-based representative skyline (DBRS). Given an integer k, a DBRS consists of k representative skyline points chosen to minimize the maximal distance between a non-representative skyline point and its nearest representative. In the 2D space, the state-of-the-art algorithm to compute the DBRS is based on dynamic programming (DP) and takes \(O(km^2)\) time, where m is the number of skyline points. Clearly, such a DP-based algorithm cannot be used for handling large-scale datasets due to its quadratic time cost. To overcome this problem, in this paper we propose a new approximate algorithm called ARS, and a new exact algorithm named PSRS, based on a carefully designed parametric search technique. We show that the ARS algorithm can guarantee a solution that is at most ε larger than the optimal solution. The proposed ARS and PSRS algorithms run in \(O(k\log^2 m\log (T/\epsilon))\) and \(O(k^2\log^3 m)\) time respectively, where T is no more than the maximal distance between any two skyline points. We also propose an improved exact algorithm, called PSRS+, based on an effective lower and upper bounding technique. We conduct extensive experimental studies over both synthetic and real-world datasets, and the results demonstrate the efficiency and effectiveness of the proposed algorithms.
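The flavor of parametric search can be shown on a deliberately simplified 1-D version (our sketch; the paper works on the 2-D skyline with true distances): a greedy routine decides whether a covering radius r is feasible with k representatives, and a binary search shrinks r.

```python
def feasible(points, k, r):
    """Greedy decision: can k representatives chosen among `points`
    (sorted scalars) cover every point within distance r? Exact in 1-D."""
    i, used, n = 0, 0, len(points)
    while i < n:
        used += 1
        # place a representative at the rightmost point within r of points[i]
        j = i
        while j + 1 < n and points[j + 1] - points[i] <= r:
            j += 1
        c = points[j]
        while i < n and points[i] - c <= r:
            i += 1  # skip everything this representative covers
    return used <= k

def min_radius(points, k, eps=1e-6):
    """Parametric (binary) search on the covering radius r."""
    points = sorted(points)
    lo, hi = 0.0, points[-1] - points[0]
    while hi - lo > eps:
        mid = (lo + hi) / 2
        if feasible(points, k, mid):
            hi = mid
        else:
            lo = mid
    return hi

print(round(min_radius([0, 1, 2, 10, 11, 20], k=3), 3))  # -> 1.0
```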

9.
For each sufficiently large n, there exists a unary regular language L such that both L and its complement \(L^c\) are accepted by unambiguous nondeterministic automata with at most n states, while the smallest deterministic automata for these two languages still require a superpolynomial number of states, at least \(e^{\Omega(\sqrt[3]{n\cdot\ln^{2}n})}\). Actually, L and \(L^c\) are “balanced” not only in the number of states but, moreover, they are accepted by nondeterministic machines sharing the same transition graph, differing only in the distribution of their final states. As a consequence, the gap between the sizes of unary unambiguous self-verifying automata and deterministic automata is also superpolynomial.

10.
Considering the current price gap between hard disk and flash memory SSD storage, for applications dealing with large-scale data it is economically more sensible to use flash memory drives to supplement disk drives rather than to replace them. This paper presents FaCE, a new low-overhead caching strategy that uses flash memory as an extension to the RAM buffer of database systems. FaCE aims at improving transaction throughput as well as shortening the recovery time from a system failure. To achieve these goals, we propose two novel algorithms for flash cache management, namely multi-version FIFO replacement and group second chance; together they provide flash write optimization as well as disk access reduction. In addition, FaCE takes advantage of the nonvolatility of flash memory to fully support database recovery by extending the scope of a persistent database to include the data pages stored in the flash cache. We have implemented FaCE in the PostgreSQL open-source database server and demonstrated its effectiveness on TPC-C benchmarks in comparison with existing caching methods such as Lazy Cleaning and Linux Bcache.
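A toy reading of multi-version FIFO (our sketch under simplifying assumptions, not FaCE's actual implementation): page versions are only ever appended, which keeps flash writes sequential, and eviction quietly discards copies that a newer version has already superseded.

```python
from collections import deque

class MVFIFOCache:
    """Sketch of a multi-version FIFO flash cache: puts append a new
    version (sequential flash write); eviction pops the oldest entry and
    only counts as a real eviction if that copy is still the current one."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.queue = deque()   # (page_id, version) in write order
        self.latest = {}       # page_id -> current version number
        self.store = {}        # (page_id, version) -> page data

    def put(self, page_id, data):
        version = self.latest.get(page_id, 0) + 1
        self.latest[page_id] = version
        self.queue.append((page_id, version))
        self.store[(page_id, version)] = data
        while len(self.queue) > self.capacity:
            old_id, old_ver = self.queue.popleft()
            del self.store[(old_id, old_ver)]
            if self.latest.get(old_id) == old_ver:
                del self.latest[old_id]  # evicted copy was the live one

    def get(self, page_id):
        ver = self.latest.get(page_id)
        return self.store.get((page_id, ver)) if ver else None

cache = MVFIFOCache(capacity=3)
cache.put("p1", "v1"); cache.put("p2", "v1"); cache.put("p1", "v2")
print(cache.get("p1"))  # 'v2': the stale 'v1' copy is dead weight until FIFO drops it
```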

11.
The performance of a linear error-detecting code in a symmetric memoryless channel is characterized by its probability of undetected error, which is a function of the channel symbol error probability, involving basic parameters of a code and its weight distribution. However, the code weight distribution is known for relatively few codes, since its computation is an NP-hard problem. It should therefore be useful to have criteria for properness and goodness in error detection that do not involve the code weight distribution. In this work we give two such criteria. We show that a binary linear code C of length n and its dual code \(C^\perp\) of minimum code distance d are proper for error detection whenever \(d \ge \lfloor n/2\rfloor + 1\), and that C is proper in the interval \([(n+1-2d)/(n-d),\ 1/2]\) whenever \(\lfloor n/3\rfloor + 1 \le d \le \lfloor n/2\rfloor\). We also provide examples, mostly of Griesmer codes and their duals, that satisfy the above conditions.
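For context, the quantity in question is the standard undetected-error probability over a binary symmetric channel; written out (a textbook formula, not specific to this paper):

```latex
% Undetected-error probability of a binary linear [n,k] code C with
% weight distribution A_1, ..., A_n over a BSC with crossover probability p:
P_{ue}(C,p) = \sum_{i=1}^{n} A_i \, p^{i} (1-p)^{n-i}
% C is called "proper" when P_{ue}(C,p) is monotonically increasing on
% [0, 1/2]; the paper's criteria certify this from n and d alone,
% avoiding the (NP-hard) computation of the A_i.
```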

12.
Many contemporary steganographic schemes aim to embed a fixed-length secret message in the cover while minimizing the stego distortion. However, in some cases the sender needs to embed a variable-length secret payload while staying within an expected level of stego security. This kind of problem is named secure payload estimation (SPE). In this paper, we propose a practical SPE approach for an individual cover. The stego security metric we adopt here is the detection error rate of the steganalyzer (\(P_E\)). Our method is based on a priori knowledge functions, which are two kinds of functions to be determined before the estimation. The first is the relation between the detection error rate and the stego distortion (the \(P_E\)-D function). The second reflects the relationship between stego distortion and payload rate (D-α) for the chosen cover. The \(P_E\)-D function is general knowledge, calculated from an image library; the D-α function is cover-specific and needs to be determined on site. The estimating procedure is as follows: the sender first solves for the distortion D corresponding to the expected \(P_E\) via the \(P_E\)-D function, and then calculates the corresponding secure payload α via the D-α function of the cover. Among the on-site operations, the most time-consuming part is calculating the D-α function for the cover image, which costs one run of STC coding. Beyond this, the remaining on-site operations are solving single-variable formulas, which can be easily tackled. Our approach is an efficient and practical solution for the SPE problem.
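The two-step look-up can be mimicked with stand-in curves (everything below, names, curves and numbers alike, is our hypothetical illustration, not the paper's fitted a-priori functions):

```python
def secure_payload(pe_target, pe_of_d, alpha_of_d, d_grid):
    """Toy version of the two-step estimate: P_E falls as distortion D
    grows, so take the largest D whose predicted detector error rate
    still meets the target, then read the payload off the D -> alpha
    curve of the cover."""
    best_d = None
    for d in d_grid:                 # d_grid sorted ascending
        if pe_of_d(d) >= pe_target:  # still secure enough at this distortion
            best_d = d
    return alpha_of_d(best_d) if best_d is not None else 0.0

# Illustrative monotone stand-ins for the two a-priori functions.
pe_of_d = lambda d: 0.5 * (1.0 - d)   # P_E: 0.5 (blind guessing) -> 0
alpha_of_d = lambda d: d ** 0.5       # payload rate grows with distortion
d_grid = [i / 100 for i in range(101)]
print(secure_payload(0.4, pe_of_d, alpha_of_d, d_grid))  # payload at P_E >= 0.4
```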

13.
Aggregate similarity search, also known as the aggregate nearest-neighbor (Ann) query, finds many useful applications in spatial and multimedia databases. Given a group Q of M query objects, it retrieves from a database the objects most similar to Q, where the similarity is an aggregation (e.g., \({{\mathrm{sum}}}\), \(\max \)) of the distances between each retrieved object p and all the objects in Q. In this paper, we propose an added flexibility to the query definition, where the similarity is an aggregation over the distances between p and any subset of \(\phi M\) objects in Q for some support \(0< \phi \le 1\). We call this new definition flexible aggregate similarity search and accordingly refer to a query as a flexible aggregate nearest-neighbor (Fann) query. We present algorithms for answering Fann queries exactly and approximately. Our approximation algorithms are especially appealing: they are simple, highly efficient, and work well in both low and high dimensions. They also return near-optimal answers with guaranteed constant-factor approximations in any dimension. Extensive experiments on large real and synthetic datasets from 2 to 74 dimensions have demonstrated their superior efficiency and high quality.
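The definition is easy to pin down in code: a brute-force sketch of ours that fixes the semantics only, not the paper's exact or approximate algorithms. For a monotone aggregate such as sum or max, the best \(\phi M\)-subset of Q is simply the \(\phi M\) query points closest to p.

```python
import math

def fann_distance(p, Q, phi, agg=sum):
    """Flexible aggregate distance: aggregate over the phi*|Q| query
    points nearest to p instead of over all of Q."""
    dists = sorted(math.dist(p, q) for q in Q)
    m = max(1, round(phi * len(Q)))
    return agg(dists[:m])

def fann(data, Q, phi, agg=sum):
    """Exact (brute-force) Fann answer: scan every object."""
    return min(data, key=lambda p: fann_distance(p, Q, phi, agg))

Q = [(0, 0), (10, 0), (0, 10)]
data = [(1, 1), (5, 5), (9, 1)]
print(fann(data, Q, phi=1.0, agg=max))  # must sit near ALL of Q -> (5, 5)
print(fann(data, Q, phi=1/3))           # near any ONE query point -> (1, 1) (ties with (9, 1))
```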

14.
15.
Hybrid systems represent an important and powerful formalism for modeling real-world applications such as embedded systems. A verification tool like SpaceEx is based on the exploration of a symbolic search space (the region space). As a verification tool, it is typically optimized towards proving the absence of errors. In some settings, e.g., when the verification tool is employed in a feedback-directed design cycle, one would like to have the option to call a version that is optimized towards finding an error trajectory in the region space. A recent approach in this direction is based on guided search. Guided search relies on a cost function that indicates which states are promising to be explored, and preferably explores more promising states first. In this paper, we propose an abstraction-based cost function based on coarse-grained space abstractions for guiding the reachability analysis. For this purpose, a suitable abstraction technique that exploits the flexible granularity of modern reachability analysis algorithms is introduced. The new cost function is an effective extension of pattern database approaches that have been successfully applied in other areas. The approach has been implemented in the SpaceEx model checker. The evaluation shows its practical potential.
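The guided-search loop itself is compact; below is a generic best-first sketch of ours, with a hypothetical coarse-grid cost function standing in for the paper's abstraction-based one:

```python
import heapq

def guided_search(start, successors, is_error, cost):
    """Best-first exploration: expand the state with the lowest cost
    estimate first, so an error trajectory is found sooner when the
    cost function (e.g. a pattern-database estimate) is informative."""
    frontier = [(cost(start), 0, start)]
    seen, tiebreak = {start}, 1
    while frontier:
        _, _, state = heapq.heappop(frontier)
        if is_error(state):
            return state
        for nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                heapq.heappush(frontier, (cost(nxt), tiebreak, nxt))
                tiebreak += 1
    return None  # state space exhausted: no reachable error

# Toy state space: walk a grid toward an "error" cell at (9, 9); the
# cost is the distance measured in a coarsened (3 x 3-block) abstraction.
succ = lambda s: [(s[0] + 1, s[1]), (s[0], s[1] + 1)]
coarse = lambda s: (s[0] // 3, s[1] // 3)
cost = lambda s: abs(3 - coarse(s)[0]) + abs(3 - coarse(s)[1])
print(guided_search((0, 0), succ, lambda s: s == (9, 9), cost))
```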

16.
Nowadays, many real-world problems are encoded into SAT instances and efficiently solved by modern SAT solvers. These solvers, usually known as Conflict-Driven Clause Learning (CDCL) SAT solvers, include a variety of sophisticated techniques, such as clause learning, lazy data structures, conflict-based adaptive branching heuristics, or random restarts, among others. However, the reasons for their efficiency in solving real-world, or industrial, SAT instances are still unknown. The common wisdom in the SAT community is that these techniques exploit some hidden structure of real-world problems. In this thesis, we characterize some important features of the underlying structure of industrial SAT instances, namely the community structure and the self-similar structure. We observe that most industrial SAT formulas, viewed as graphs, have these two properties. This means that (i) in a graph with a clear community structure, i.e., having high modularity, we can find a partition of its nodes into communities such that most edges connect nodes of the same community; and (ii) in a graph with a self-similar pattern, i.e., being fractal, its shape is kept after re-scalings, i.e., after grouping sets of nodes into single nodes. We also analyze how these structures are affected by the effects of CDCL techniques during the search. Using the previous structural studies, we propose three applications. First, we face the problem of generating pseudo-industrial random SAT instances using the notion of modularity. Our model generates instances similar to (classical) random SAT formulas when the modularity is low, but when this value is high, our model is also adequate to model realistic pseudo-industrial problems. Second, we propose a method based on the community structure of the instance to detect relevant learnt clauses. Our technique augments the original instance with this set of relevant clauses, which results in an overall improvement of the efficiency of several state-of-the-art CDCL SAT solvers. Finally, we analyze the classification of industrial SAT instances into families using the previously analyzed structural features, and we compare it to other classifiers commonly used in portfolio SAT approaches. In summary, this dissertation extends the understanding of the structure of SAT instances, with the aim of better explaining the success of CDCL techniques and possibly improving them, and proposes a number of applications based on this analysis of the underlying structure of SAT formulas.
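Modularity, the measure behind the community structure, is simple to compute for a given partition; a minimal sketch of ours on a hand-made variable-interaction graph:

```python
def modularity(edges, community):
    """Newman modularity of a node partition: the fraction of edges that
    stay inside communities, minus the fraction expected in a random
    graph with the same degrees (the configuration model)."""
    m = len(edges)
    deg = {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    q = sum(1 for u, v in edges if community[u] == community[v]) / m
    comm_deg = {}
    for node, d in deg.items():
        c = community[node]
        comm_deg[c] = comm_deg.get(c, 0) + d
    q -= sum((d / (2 * m)) ** 2 for d in comm_deg.values())
    return q

# Variable-interaction graph of a tiny CNF: x1..x3 vs x4..x6,
# with a single bridging edge between the two clusters.
edges = [(1, 2), (2, 3), (1, 3), (4, 5), (5, 6), (4, 6), (3, 4)]
community = {1: "A", 2: "A", 3: "A", 4: "B", 5: "B", 6: "B"}
print(round(modularity(edges, community), 3))  # about 0.357, well above 0
```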

17.
Choosing the best location for starting a business or expanding an existing enterprise is an important issue. A number of location selection problems have been discussed in the literature; they often apply the Reverse Nearest Neighbor as the criterion for finding suitable locations. In this paper, we apply the Average Distance as the criterion and propose the so-called k-most suitable locations (k-MSL) selection problem. Given a positive integer k and three datasets (a set of customers, a set of existing facilities, and a set of potential locations), the k-MSL selection problem outputs k locations from the potential location set such that the average distance between a customer and its nearest facility is minimized. In this paper, we formally define the k-MSL selection problem and show that it is NP-hard. We first propose a greedy algorithm which can quickly find an approximate result for users. Two exact algorithms are then proposed to find the optimal result. Several pruning rules are applied to increase computational efficiency. We evaluate the algorithms’ performance using both synthetic and real datasets. The results show that our algorithms are able to deal with the k-MSL selection problem efficiently.
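A natural greedy baseline (our sketch of the greedy idea, not necessarily the paper's exact procedure): repeatedly open the candidate location that most reduces the average customer-to-nearest-facility distance.

```python
import math

def greedy_msl(customers, facilities, candidates, k):
    """Greedy k-MSL heuristic: in each round, open the candidate that
    yields the largest drop in the summed nearest-facility distances."""
    nearest = [min(math.dist(c, f) for f in facilities) for c in customers]
    chosen = []
    for _ in range(k):
        best, best_gain, best_near = None, 0.0, None
        for cand in candidates:
            if cand in chosen:
                continue
            new_near = [min(d, math.dist(c, cand))
                        for c, d in zip(customers, nearest)]
            gain = sum(nearest) - sum(new_near)
            if gain > best_gain:
                best, best_gain, best_near = cand, gain, new_near
        if best is None:
            break  # no remaining candidate improves the objective
        chosen.append(best)
        nearest = best_near
    return chosen, sum(nearest) / len(customers)

customers = [(0, 0), (0, 1), (10, 10), (10, 11)]
facilities = [(5, 5)]
candidates = [(0, 0.5), (10, 10.5), (5, 6)]
print(greedy_msl(customers, facilities, candidates, k=2))
```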

18.
Data mining for client behavior analysis has become increasingly important in business; however, further analysis of transactions and sequential behaviors would be of even greater value, especially in the financial service industry (such as banking and insurance), government, and so on. In a real-world business application of taxation debt collection, in order to understand the internal relationship between taxpayers’ sequential behaviors (payment, lodgment and actions) and compliance with their debt, we need to find the contrast sequential behavior patterns between compliant and non-compliant taxpayers. Contrast Patterns (CP) are defined as the itemsets showing the difference/discrimination between two classes/datasets (Dong and Li, 1999). However, the existing CP mining methods, which can only mine itemset patterns, are not suitable for mining sequential patterns, such as time-ordered transactions in taxpayer sequential behaviors. Little work has been conducted on Contrast Sequential Pattern (CSP) mining so far. To address this issue, we develop a CSP mining approach, eCSP, using an effective CSP-tree structure, which improves the PrefixSpan tree (Pei et al., 2001) for mining contrast patterns. We propose some heuristics and interestingness filtering criteria, and integrate them into the CSP-tree seamlessly to reduce the search space and to find business-interesting patterns as well. The performance of the proposed approach is evaluated on three real-world datasets. In addition, we use a case study to show how to implement the approach to analyze taxpayer behavior. The results show very promising performance and convincing business value.
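The contrast notion itself (not the eCSP/CSP-tree machinery) fits in a few lines: a sequential pattern is a contrast pattern when its support differs sharply between the two classes. A toy sketch of ours with made-up taxpayer event sequences:

```python
def support(sequences, pattern):
    """Fraction of sequences containing `pattern` as a subsequence
    (order preserved, gaps allowed)."""
    def contains(seq, pat):
        it = iter(seq)
        return all(item in it for item in pat)  # consumes `it` in order
    return sum(contains(s, pattern) for s in sequences) / len(sequences)

# A pattern contrasts the classes if it is frequent in one and rare in
# the other (e.g. support ratio above a threshold).
compliant = [["lodge", "pay"], ["lodge", "pay", "pay"], ["pay"]]
noncompliant = [["lodge", "warn"], ["warn", "warn"], ["lodge", "warn", "pay"]]
pat = ["lodge", "pay"]
print(support(compliant, pat), support(noncompliant, pat))  # 0.67 vs 0.33
```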

19.
Recently, Reverse k Nearest Neighbors (RkNN) queries, returning every answer for which the query is one of its k nearest neighbors, have been extensively studied in the database research community. But the RkNN query cannot retrieve spatio-textual objects, which are described by their spatial location and a set of keywords. Therefore, researchers proposed the RSTkNN query to find these objects, taking both spatial and textual similarity into consideration. However, the RSTkNN query can neither control the size of the answer set nor rank the results according to their degree of influence on the query. In this paper, we propose a new problem, the Ranked Reverse Boolean Spatial Keyword Nearest Neighbors query, called the Ranked-RBSKNN query, which considers both spatial similarity and textual relevance, and returns the t answers with the greatest degree of influence. We propose a separate index and a hybrid index to process such queries efficiently. Experimental results on different real-world and synthetic datasets show that our approaches achieve better performance.
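For the purely spatial base case, RkNN semantics can be pinned down by a brute-force scan (our sketch; real systems use indexes instead of scanning):

```python
import math

def rknn(data, q, k):
    """Brute-force Reverse k Nearest Neighbors: return every object whose
    k nearest neighbours (among the other objects plus q) include q."""
    result = []
    for p in data:
        others = [o for o in data if o is not p] + [q]
        others.sort(key=lambda o: math.dist(p, o))
        if q in others[:k]:
            result.append(p)
    return result

data = [(0, 0), (1, 0), (5, 5), (6, 5)]
print(rknn(data, q=(0.4, 0.0), k=1))  # (0, 0) and (1, 0): q is their nearest neighbour
```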

20.
A basic property of one-dimensional surjective cellular automata (CA) is that any preimage of a spatially periodic configuration (SPC) is spatially periodic as well. This paper investigates the relationship between the periods of SPCs and the periods of their preimages for various classes of CA. When the CA is only surjective and y is an SPC of least period p, the least periods of all preimages of y are multiples of p. By leveraging the De Bruijn graph representation of CA, we devise a general algorithm to compute the least periods appearing in the preimages of an SPC, along with their corresponding multiplicities (i.e. how many preimages have a particular least period). Next, we consider the case of linear and bipermutive cellular automata (LBCA) defined over a finite field as state alphabet. In particular, we show an equivalence between preimages of LBCA and concatenated linear recurring sequences (LRS) that allows us to give a complete characterization of their periods. Finally, we generalize these results to LBCA defined over a finite ring as alphabet.
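The "multiples of p" property follows from shift-equivariance of the global map F; spelled out (our one-line rendering of the standard argument, with \(\sigma\) the shift):

```latex
% Let y = F(x) have least period p, and let q be the least period of x.
% Every CA global map commutes with the shift sigma, hence
\sigma^{q}(y) = \sigma^{q}(F(x)) = F(\sigma^{q}(x)) = F(x) = y
% so q is a period of y; since the periods of y are exactly the
% multiples of its least period, p | q.
```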
