Similar Documents
Found 20 similar documents (search time: 187 ms)
1.
We revisit two fundamental problems in database theory. The join-dependency (JD) testing problem is to determine whether a given JD holds on a relation r. We prove that the problem is NP-hard even if the JD involves only relations each of which has only two attributes. The JD-existence testing problem is to determine if there exists any non-trivial JD satisfied by r. We present an I/O-efficient algorithm in the external memory model, which in fact settles the closely related Loomis–Whitney enumeration problem. As a side product, we solve the triangle enumeration problem with the optimal I/O complexity, improving a recent result of Pagh and Silvestri (PODS'14).
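The semantics of triangle enumeration can be pinned down with a small in-memory sketch. This is purely illustrative: the paper's contribution is the I/O-efficient external-memory version, which this code does not model, and the degree-ordering trick below is a standard device rather than necessarily the paper's.

```python
from itertools import combinations

def enumerate_triangles(edges):
    """Enumerate all triangles in an undirected graph (in-memory sketch)."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    # Orient each edge from lower- to higher-rank endpoint (rank = degree,
    # ties broken by id) so every triangle is reported exactly once.
    rank = {v: (len(adj[v]), v) for v in adj}
    triangles = []
    for u in adj:
        higher = [w for w in adj[u] if rank[w] > rank[u]]
        for v, w in combinations(higher, 2):
            if w in adj[v]:
                triangles.append((u, v, w))
    return triangles

print(enumerate_triangles([(1, 2), (2, 3), (1, 3), (3, 4)]))  # [(1, 2, 3)]
```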

2.
The k-nearest neighbor (KNN) rule is a classical and yet very effective nonparametric technique in pattern classification, but its classification performance is severely affected by outliers. The local mean-based k-nearest neighbor classifier (LMKNN) was introduced to achieve robustness against outliers by computing the local mean vector of the k nearest neighbors in each class. However, its performance suffers from the use of a single value of k for each class and the same value of k across different classes. In this paper, we propose a new KNN-based classifier, called the multi-local means-based k-harmonic nearest neighbor (MLM-KHNN) rule. In our method, the k nearest neighbors in each class are first found, and then used to compute k different local mean vectors, whose harmonic mean distance to the query sample is computed. Finally, MLM-KHNN classifies the query sample into the class with the minimum harmonic mean distance. Experimental results on twenty real-world datasets from the UCI and KEEL repositories demonstrate that the proposed MLM-KHNN classifier achieves a lower classification error rate and is less sensitive to the parameter k than nine related competitive KNN-based classifiers, especially in small training sample size situations.
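A minimal in-memory sketch of the MLM-KHNN rule as described above, assuming Euclidean distance and that the j-th local mean vector is the mean of the j nearest same-class neighbors (j = 1..k):

```python
import numpy as np

def mlm_khnn(X_train, y_train, x, k=5):
    """MLM-KHNN sketch: per class, k local means scored by harmonic mean distance."""
    best_class, best_score = None, np.inf
    for c in np.unique(y_train):
        Xc = X_train[y_train == c]
        d = np.linalg.norm(Xc - x, axis=1)
        nn = Xc[np.argsort(d)[:k]]                 # k nearest neighbors in class c
        # j-th local mean = mean of the j nearest neighbors, for j = 1..k
        local_means = np.cumsum(nn, axis=0) / np.arange(1, len(nn) + 1)[:, None]
        dists = np.linalg.norm(local_means - x, axis=1)
        # harmonic mean of the distances to the k local mean vectors
        score = len(dists) / np.sum(1.0 / np.maximum(dists, 1e-12))
        if score < best_score:
            best_class, best_score = c, score
    return best_class
```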

3.
It is known that under a wide variety of assumptions a database decomposition is lossless if and only if the database scheme has a lossless join. Biskup, Dayal and Bernstein (1979) have shown that when the given dependencies are functional, the database scheme has a lossless join if and only if one of the relation schemes is a key for the universal scheme. In this note we supply an alternative proof of that characterization. The proof uses tools from the theory of embedded join dependencies and the theory of tuple and equality generating dependencies, but is, nevertheless, much simpler than the previously published proof.
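The characterization suggests a direct test: compute attribute-set closures under the given FDs and check whether some relation scheme is a key of the universal scheme. A sketch with hypothetical helper names:

```python
def closure(attrs, fds):
    """Closure of an attribute set under FDs given as (lhs, rhs) pairs of sets."""
    result = set(attrs)
    changed = True
    while changed:
        changed = False
        for lhs, rhs in fds:
            if lhs <= result and not rhs <= result:
                result |= rhs
                changed = True
    return result

def has_lossless_join(schemes, fds, universe):
    """Biskup-Dayal-Bernstein criterion (FDs only): lossless join
    iff some relation scheme is a key for the universal scheme."""
    return any(closure(s, fds) == set(universe) for s in schemes)

fds = [({"A"}, {"B"}), ({"B"}, {"C"})]
print(has_lossless_join([{"A", "B"}, {"B", "C"}], fds, "ABC"))  # True: AB is a key
```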

4.
Nearest neighbor (NN) rule is one of the simplest and most important methods in pattern recognition. In this paper, we propose a kernel difference-weighted k-nearest neighbor (KDF-KNN) method for pattern classification. The proposed method defines the weighted KNN rule as a constrained optimization problem, and we then propose an efficient solution to compute the weights of the different nearest neighbors. Unlike traditional distance-weighted KNN, which assigns different weights to the nearest neighbors according to their distance to the unclassified sample, difference-weighted KNN weighs the nearest neighbors by using the correlation of the differences between the unclassified sample and its nearest neighbors. To take the effective nonlinear structure information into account, we further extend difference-weighted KNN to its kernel version, KDF-KNN. Our experimental results indicate that KDF-KNN is much better than the original KNN and distance-weighted KNN methods, and is comparable to or better than several state-of-the-art methods in terms of classification accuracy.
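The abstract only says the weights solve a constrained optimization problem, so the following unkernelized sketch is a plausible reading, assuming the least-squares formulation min_w ||x − Σᵢ wᵢxᵢ||² subject to Σᵢ wᵢ = 1, solved in closed form via the Gram matrix of the differences (x − xᵢ); a kernel version would replace the inner products in that Gram matrix with kernel evaluations.

```python
import numpy as np

def difference_weighted_knn_weights(x, neighbors, reg=1e-3):
    """Closed-form weights for the assumed constrained least-squares problem.

    x: query vector (d,); neighbors: the k nearest neighbors, shape (k, d).
    """
    D = x - neighbors                          # k x d matrix of differences
    G = D @ D.T                                # Gram matrix of the differences
    G = G + reg * (np.trace(G) / len(G) + 1e-12) * np.eye(len(G))  # stabilize
    w = np.linalg.solve(G, np.ones(len(G)))    # solve G w = 1
    return w / w.sum()                         # enforce sum(w) == 1
```

The query would then be assigned, as in weighted KNN generally, to the class with the largest total weight among its k nearest neighbors.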

5.
In relational databases, a query can be formulated as a relational algebra expression using projection, selection, restriction, cross product and union. In this paper, we consider a problem, called the membership problem, of determining whether a given dependency d is valid in a given relational expression E over a given database scheme R, that is, whether every instance of the view scheme defined by E satisfies d (assuming that the underlying constraints in R are always satisfied). Consider the case where each relation scheme in R is associated with functional dependencies (FDs) as constraints, and d is an FD. Then the complement of the membership problem is NP-complete. However, if E contains no union, then the membership problem can be solved in polynomial time. Furthermore, if E contains neither a union nor a projection, then we can construct in polynomial time a cover for the valid FDs in E, that is, a set of FDs which implies every valid FD in E. Now consider the case where each relation scheme in R is associated with multivalued dependencies (MVDs) as well as FDs, and d is an FD or an MVD. Even if E consists of selections and cross products only, the membership problem is NP-hard. However, if E contains no union and each relation scheme name in R occurs in E at most once, then the membership problem can be solved in polynomial time. As a corollary of this result, it can be determined in polynomial time whether a given FD or MVD is valid in R1 ⋈ ⋯ ⋈ Rs, where R1,…,Rs are relation schemes with FDs and MVDs, and Ri ⋈ Rj denotes the natural join of Ri and Rj.

6.
The similarity join has become an important database primitive for supporting similarity search and data mining. A similarity join combines two sets of complex objects such that the result contains all pairs of similar objects. Two types of similarity join are well known: the distance range join, in which the user defines a distance threshold for the join, and the closest pair query or k-distance join, which retrieves the k most similar pairs. In this paper, we propose an important third similarity join operation called the k-nearest neighbour join, which combines each point of one point set with its k nearest neighbours in the other set. We discover that many standard algorithms of Knowledge Discovery in Databases (KDD), such as k-means and k-medoid clustering, nearest neighbour classification, data cleansing, and postprocessing of sampling-based data mining, can be implemented on top of the k-nn join operation to achieve performance improvements without affecting the quality of the result of these algorithms. We propose a new algorithm to compute the k-nearest neighbour join using the multipage index (MuX), a specialised index structure for the similarity join. To reduce both CPU and I/O cost, we develop optimal loading and processing strategies.
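For reference, the semantics of the k-nn join can be pinned down with a brute-force O(|R|·|S|) sketch; the paper's contribution is computing the same result efficiently via the MuX index and tuned loading/processing strategies.

```python
import numpy as np

def knn_join(R, S, k):
    """Brute-force k-nn join: pair each point of R with its k nearest in S.

    R, S: numpy arrays of shape (|R|, d) and (|S|, d); assumes k <= |S|.
    Returns a list of (index in R, list of k indices in S).
    """
    d = np.linalg.norm(R[:, None, :] - S[None, :, :], axis=2)  # |R| x |S| distances
    idx = np.argsort(d, axis=1)[:, :k]
    return [(i, list(idx[i])) for i in range(len(R))]
```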

7.
Nearest Neighbor search is an important and widely used technique in a number of important application domains. In many of these domains, the dimensionality of the data representation is often very high. Recent theoretical results have shown that the concept of proximity or nearest neighbors may not be very meaningful for the high dimensional case. Therefore, it is often a complex problem to find good quality nearest neighbors in such data sets. Furthermore, it is also difficult to judge the value and relevance of the returned results. In fact, it is hard for any fully automated system to satisfy a user about the quality of the nearest neighbors found unless he is directly involved in the process. This is especially the case for high dimensional data in which the meaningfulness of the nearest neighbors found is questionable. In this paper, we address the complex problem of high dimensional nearest neighbor search from the user perspective by designing a system which uses effective cooperation between the human and the computer. The system provides the user with visual representations of carefully chosen subspaces of the data in order to repeatedly elicit his preferences about the data patterns which are most closely related to the query point. These preferences are used in order to determine and quantify the meaningfulness of the nearest neighbors. Our system is not only able to find and quantify the meaningfulness of the nearest neighbors, but is also able to diagnose situations in which the nearest neighbors found are truly not meaningful.

8.
Consider a second-order differential equation of the form y″ + ay′ + by = 0 with a, b ∈ Q(x). Kovacic's algorithm tries to compute a solution of the associated Riccati equation that is algebraic and of minimal degree over Q̄(x). The coefficients of the monic irreducible polynomial of this solution are in C(x), where C is a finite algebraic extension of Q. In this paper we give a bound for the degree of the extension [C : Q]. Similar results are obtained for third-order differential equations.

9.
In many advanced database applications (e.g., multimedia databases), data objects are transformed into high-dimensional points and manipulated in high-dimensional space. One of the most important but costly operations is the similarity join that combines similar points from multiple datasets. In this paper, we examine the problem of processing K-nearest neighbor similarity join (KNN join). KNN join between two datasets, R and S, returns for each point in R its K most similar points in S. We propose a new index-based KNN join approach using the iDistance as the underlying index structure. We first present its basic algorithm and then propose two different enhancements. In the first enhancement, we optimize the original KNN join algorithm by using approximation bounding cubes. In the second enhancement, we exploit the reduced dimensions of data space. We conducted an extensive experimental study using both synthetic and real datasets, and the results verify the performance advantage of our schemes over existing KNN join algorithms.
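A minimal sketch of the underlying iDistance mapping, which turns each high-dimensional point into a one-dimensional key that a B+-tree can index; the reference points and the partition-separation constant c are assumptions here, and the paper's bounding-cube and reduced-dimension enhancements are omitted.

```python
import numpy as np

def idistance_keys(points, refs, c=10_000.0):
    """Map points to 1-D iDistance keys: key(p) = i*c + dist(p, O_i),
    where O_i is p's nearest reference point and c separates partitions."""
    d = np.linalg.norm(points[:, None, :] - refs[None, :, :], axis=2)
    part = d.argmin(axis=1)                       # nearest reference per point
    return part * c + d[np.arange(len(points)), part]
```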

10.
The k nearest neighbor (k-NN) classifier has been a widely used nonparametric technique in Pattern Recognition because of its simplicity and good performance. In order to decide the class of a new prototype, the k-NN classifier performs an exhaustive comparison between the prototype to classify and the prototypes in the training set T. However, when T is large, the exhaustive comparison is expensive. For this reason, many fast k-NN classifiers have been developed; some of them are based on a tree structure, which is created during a preprocessing phase using the prototypes in T. Then, in a search phase, the tree is traversed to find the nearest neighbor. The speed-up is obtained by avoiding the exploration of some parts of the tree, using pruning rules which are usually based on the triangle inequality. However, in soft sciences such as Medicine, Geology, Sociology, etc., the prototypes are usually described by numerical and categorical attributes (mixed data), and sometimes the comparison function for computing the similarity between prototypes does not satisfy metric properties. Therefore, in this work an approximate fast k most similar neighbor classifier based on a tree structure (Tree k-MSN), for mixed data and similarity functions that do not satisfy metric properties, is proposed. Some experiments with synthetic and real data are presented.

11.
Finding the k nearest neighbor objects in spatial databases is a fundamental problem in many geospatial systems, and direction is one of the key features of a spatial object. Moreover, the recent tremendous growth of sensor technologies in mobile devices produces an enormous amount of spatio-directional (i.e., spatially and directionally encoded) objects such as photos. Therefore, efficient and proper utilization of the direction feature is a new challenge. Inspired by this issue and the traditional k nearest neighbor search problem, we devise a new type of query, called the direction-constrained k nearest neighbor (DCkNN) query. The DCkNN query finds the k nearest neighbors of the query's location such that the direction of each neighbor is within a certain range of the direction of the query. We develop a new index structure called MULTI to efficiently answer the DCkNN query, with two novel index access algorithms based on a cost analysis. Furthermore, our problem and solution generalize to spatio-circulant dimensional objects (involving, for instance, a direction, or circulant periods of time such as an hour, a day, and a week). Experimental results show that our proposed index structure and access algorithms outperform two algorithms adapted from existing kNN algorithms.
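The query semantics can be captured by a brute-force sketch: filter on the circular direction difference, then take the k nearest survivors. The paper answers the same query via the MULTI index rather than a linear scan.

```python
import numpy as np

def dcknn(objs, dirs, q_loc, q_dir, k, max_angle):
    """Brute-force DCkNN: objs are n x 2 locations, dirs are n angles (radians).

    Keeps objects whose direction is within max_angle of the query direction
    (circular difference), then returns the indices of the k nearest of those.
    """
    diff = np.abs((dirs - q_dir + np.pi) % (2 * np.pi) - np.pi)  # circular diff
    cand = np.where(diff <= max_angle)[0]                        # direction filter
    dist = np.linalg.norm(objs[cand] - q_loc, axis=1)
    return cand[np.argsort(dist)[:k]]                            # k nearest survivors
```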

12.
The nearest neighbor classification method assigns an unclassified point to the class of the nearest case in a set of previously classified points. This rule is independent of the underlying joint distribution of the sample points and their classifications. An extension of this approach is the k-NN method, in which the classification of the unclassified point is made by a voting criterion among the k nearest points. The method we present here extends the k-NN idea: it searches in each class for the k points nearest to the unclassified point, and classifies it into the class which minimizes the mean distance between the unclassified point and the k nearest points within that class. As all classes can take part in the final selection process, we have called the new approach k Nearest Neighbor Equality (k-NNE). Experimental results show the suitability of the k-NNE algorithm, and its effectiveness suggests that it could be added to the current list of distance-based classifiers.
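A minimal sketch of the k-NNE decision rule as described, assuming Euclidean distance:

```python
import numpy as np

def knn_equality(X_train, y_train, x, k=3):
    """k-NNE: score each class by the mean distance of its own k nearest
    training points to x, and predict the class with the minimum score."""
    scores = {}
    for c in np.unique(y_train):
        d = np.linalg.norm(X_train[y_train == c] - x, axis=1)
        scores[c] = np.sort(d)[:k].mean()
    return min(scores, key=scores.get)
```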

13.
Sketches are introduced as presentations of many-sorted algebraic theories, and data types are described as initial algebras for such sketches. A construction A+ of B is described, where A+ is a suitably structured sketch and B is a sketch. In this construction, each operation from A+ operates on all the sorts of B. Many examples are given. The two main theoretical results are the theorem about the tensor product ⊗ for structured sketches, such that A+ of (B+ of C) ≃ (A+ ⊗ B+) of C (Theorem 3.17), and the "homomorphism" Theorem 4.2, which describes the operation of structured sketches on the fibred category of all models of all sketches.

14.
In this paper we discuss the application of the dynamic clustering feature of hashing to a relational database machine. By partitioning the relations using hashing, large load reductions in join and set operations are realized. Several machine architectures based on hashing are presented. We propose a database machine GRACE which adopts a novel relational algebraic processing algorithm based on hashing and sorting. Whereas conventional logic-per-track machines perform poorly in a join-dominant environment, GRACE can execute join efficiently in O((N+M)/K) time, where N and M are the cardinalities of the two relations and K is the number of memory banks.
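The core idea, partitioning both relations by a hash of the join attribute so each bucket pair can be joined independently, can be sketched in a few lines. This is an in-memory illustration only; GRACE itself stages the buckets across memory banks and combines hashing with sorting.

```python
def hash_partition_join(R, S, key_r, key_s, k=8):
    """GRACE-style hash join sketch: partition by hash, join bucket-by-bucket.

    R, S: lists of tuples; key_r/key_s extract the join attribute.
    """
    buckets_r = [[] for _ in range(k)]
    buckets_s = [[] for _ in range(k)]
    for t in R:
        buckets_r[hash(key_r(t)) % k].append(t)
    for t in S:
        buckets_s[hash(key_s(t)) % k].append(t)
    out = []
    for br, bs in zip(buckets_r, buckets_s):   # matching keys share a bucket
        table = {}
        for t in br:                           # build phase on the R bucket
            table.setdefault(key_r(t), []).append(t)
        for t in bs:                           # probe phase with the S bucket
            for m in table.get(key_s(t), []):
                out.append(m + t)
    return out

print(hash_partition_join([(1, "a"), (2, "b")], [(1, "x")],
                          lambda t: t[0], lambda t: t[0]))  # [(1, 'a', 1, 'x')]
```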

15.
Nearest neighbor (NN) classification assumes locally constant class conditional probabilities, and suffers from bias in high dimensions with a small sample set. In this paper, we propose a novel cam weighted distance to ameliorate the curse of dimensionality. Different from existing neighborhood-based methods, which only analyze a small space emanating from the query sample, the proposed nearest neighbor classification using the cam weighted distance (CamNN) optimizes the distance measure based on an analysis of the inter-prototype relationship. Our motivation comes from the observation that the prototypes are not isolated: prototypes with different surroundings should have different effects in the classification. The proposed cam weighted distance is orientation and scale adaptive to take advantage of the relevant information of the inter-prototype relationship, so that a better classification performance can be achieved. Experiments show that CamNN significantly outperforms one nearest neighbor classification (1-NN) and k-nearest neighbor classification (k-NN) in most benchmarks, while its computational complexity is comparable with that of 1-NN classification.

16.
For a finite alphabet Σ we define a binary relation on \(2^{\Sigma^*} \times 2^{2^{\Sigma^*}}\), called balanced immunity. A set B ⊆ Σ* is said to be balanced C-immune (with respect to a class C ⊆ 2^{Σ*} of sets) iff, for all infinite L ∈ C, $$\lim_{n \to \infty} \left| L^{\leqslant n} \cap B \right| / \left| L^{\leqslant n} \right| = \tfrac{1}{2}.$$ Balanced immunity implies bi-immunity and, in natural cases, randomness. We give a general method to find a balanced immune set B for any countable class C and prove that, for s(n) = o(t(n)) and t(n) > n, there is a B ∈ SPACE(t(n)) which is balanced immune for SPACE(s(n)), both in the deterministic and nondeterministic case.

17.
In this paper, a new approach called 'instance variant nearest neighbor' approximates the regression surface of a function using the concept of k nearest neighbors. Instead of a fixed k for the entire dataset, our assumption is that there is an optimal k for each data instance that best approximates the original function by fitting the local region. This approach can be beneficial for noisy datasets where local regions form data characteristics different from the major data clusters. We formulate the problem of finding such a k for each data instance as a combinatorial optimization problem, which is solved by particle swarm optimization. The particle swarm optimization is extended with a rounding scheme that rounds continuous-valued candidate solutions up or down to integers, i.e., the number k of neighbors. We apply our new approach to five real-world regression datasets and compare its prediction performance with that of other function approximation algorithms, including the standard k nearest neighbor, multi-layer perceptron, and support vector regression. We observed that the instance variant nearest neighbor outperforms these algorithms on several datasets. In addition, our new approach provides consistent outputs on datasets where other algorithms perform poorly.
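A compact sketch of the search loop under stated assumptions: a leave-one-out squared error is used as the fitness (the paper's exact objective may differ), and a standard PSO velocity update is combined with the rounding scheme, applied before each evaluation.

```python
import numpy as np

rng = np.random.default_rng(0)

def loo_error(ks, X, y):
    """Leave-one-out squared error when instance i is predicted as the mean
    of its own k_i nearest neighbors (hypothetical fitness function)."""
    d = np.linalg.norm(X[:, None] - X[None, :], axis=2)
    np.fill_diagonal(d, np.inf)                       # never use itself
    order = np.argsort(d, axis=1)
    preds = np.array([y[order[i, :ks[i]]].mean() for i in range(len(X))])
    return ((preds - y) ** 2).mean()

def pso_instance_k(X, y, k_max=10, particles=20, iters=50):
    """PSO over one k per instance; positions are rounded to ints in [1, k_max]."""
    n = len(X)
    pos = rng.uniform(1, k_max, (particles, n))
    vel = np.zeros((particles, n))
    pbest, pbest_f = pos.copy(), np.full(particles, np.inf)
    gbest, gbest_f = pos[0].copy(), np.inf
    for _ in range(iters):
        for i in range(particles):
            ks = np.clip(np.rint(pos[i]), 1, k_max).astype(int)  # rounding scheme
            f = loo_error(ks, X, y)
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i].copy(), f
            if f < gbest_f:
                gbest, gbest_f = pos[i].copy(), f
        r1, r2 = rng.random((2, particles, n))
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, 1, k_max)
    return np.clip(np.rint(gbest), 1, k_max).astype(int)

X = np.linspace(0, 1, 40)[:, None]
y = np.sin(6 * X[:, 0]) + rng.normal(0, 0.1, 40)
print(pso_instance_k(X, y)[:10])                      # per-instance k values
```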

18.
Nearest neighbor editing aided by unlabeled data
This paper proposes a novel method for nearest neighbor editing. Nearest neighbor editing aims to increase the classifier's generalization ability by removing noisy instances from the training set. Traditionally, nearest neighbor editing edits (removes/retains) each instance by the voting of the instances in the training set (labeled instances). However, motivated by semi-supervised learning, we propose a novel editing methodology which edits each training instance by the voting of all the available instances (both labeled and unlabeled). We expect that the editing performance can be boosted by appropriately using unlabeled data. Our idea relies on the fact that in many applications, in addition to the training instances, many unlabeled instances are available, since they require no human annotation effort. Three popular data editing methods, namely edited nearest neighbor, repeated edited nearest neighbor and All k-NN, are adopted to verify our idea. They are tested on a set of UCI data sets. Experimental results indicate that all three editing methods achieve improved performance with the aid of unlabeled data. Moreover, the improvement is more remarkable when the ratio of training data to unlabeled data is small.
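One plausible instantiation of the idea, using edited nearest neighbor as the base method: provisionally label the unlabeled instances with kNN, then let all instances, labeled and unlabeled, vote on whether each training instance is kept. Parameter choices and helper names are illustrative.

```python
import numpy as np
from collections import Counter

def knn_label(X_ref, y_ref, x, k):
    """Majority label among the k nearest reference points."""
    d = np.linalg.norm(X_ref - x, axis=1)
    return Counter(y_ref[np.argsort(d)[:k]]).most_common(1)[0][0]

def edit_with_unlabeled(X_lab, y_lab, X_unlab, k=3):
    """ENN aided by unlabeled data: remove training instances whose label
    disagrees with the vote of their k nearest neighbors among ALL instances."""
    y_unlab = np.array([knn_label(X_lab, y_lab, x, k) for x in X_unlab])
    X_all = np.vstack([X_lab, X_unlab])
    y_all = np.concatenate([y_lab, y_unlab])
    keep = []
    for i in range(len(X_lab)):
        d = np.linalg.norm(X_all - X_lab[i], axis=1)
        d[i] = np.inf                          # exclude the instance itself
        vote = Counter(y_all[np.argsort(d)[:k]]).most_common(1)[0][0]
        if vote == y_lab[i]:
            keep.append(i)
    return X_lab[keep], y_lab[keep]
```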

19.
S. Q. Zhu, Computing, 1995, 54(3): 251–272
This paper deals with numerical methods for solving linear variational inequalities on an arbitrary closed convex subset C of ℝⁿ. Although numerous iterations have been studied for the case C = ℝ₊ⁿ, few have been proposed for the case when C is a general closed convex subset. The essential difficulty in this case is the nonlinearity of C's boundary. In this paper, iteration processes are designed for solving linear variational inequalities on an arbitrary closed convex subset C. In our algorithms, the computation of a linear variational inequality is decomposed into a sequence of problems of projecting a vector onto the closed convex subset C, which are computable as long as the equations describing the boundary are given. In particular, using our iterations one can easily compute a solution when C is one of the common closed convex subsets such as a cube, ball, ellipsoid, etc. The non-accurate iteration, estimates of the solutions on unbounded domains, and the theory of approximating the boundaries are also established. Moreover, a necessary and sufficient condition is given for a vector to be an approximate solution. Finally, some numerical examples are presented, which show that the designed algorithms are effective and efficient. The exposition of this paper is self-contained.
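The decomposition into projection subproblems can be illustrated with the classical projected iteration x ← P_C(x − τ(Mx + q)). This is a sketch, not the specific iterations designed in the paper; the ball projection below stands in for any computable projection onto C.

```python
import numpy as np

def project_ball(x, radius=1.0):
    """Projection onto the Euclidean ball: the per-step subproblem."""
    n = np.linalg.norm(x)
    return x if n <= radius else x * (radius / n)

def solve_lvi(M, q, project, tau=0.1, iters=1000, tol=1e-10):
    """Projected iteration for the LVI: find x in C with
    (Mx + q)^T (y - x) >= 0 for all y in C.  Converges for suitable
    tau when M is positive definite (illustrative assumption)."""
    x = project(np.zeros(len(q)))
    for _ in range(iters):
        x_new = project(x - tau * (M @ x + q))
        if np.linalg.norm(x_new - x) < tol:
            break
        x = x_new
    return x

M = np.array([[2.0, 0.0], [0.0, 2.0]])
q = np.array([-1.0, -1.0])
print(solve_lvi(M, q, project_ball))   # interior solution, approx [0.5, 0.5]
```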

20.
This article presents a novel type of query in spatial databases, called the direction-aware bichromatic reverse k nearest neighbor (DBRkNN) query, which extends the bichromatic reverse nearest neighbor query. Given two disjoint sets, P and S, of spatial objects, and a query object q in S, the DBRkNN query returns a subset P′ of P such that the k nearest neighbors of each object in P′ include q, and each object in P′ has a direction toward q and lies within a pre-defined distance. We formally define the DBRkNN query and then propose an efficient algorithm, called DART, for processing it. Our method utilizes a grid-based index to cluster the spatial objects, and a B+-tree to index the direction angle. We adopt a filter-refinement framework that is widely used in many algorithms for reverse nearest neighbor queries. In the filtering step, DART eliminates all objects that are farther from the query object than a pre-defined distance, or that have an invalid direction angle. In the refinement step, each remaining object is verified by checking whether the query object is actually one of its k nearest neighbors. As a major extension of DART, we also present an improved algorithm, called DART+, for DBRkNN queries. Extensive experiments with several datasets show that DART outperforms an R-tree-based naive algorithm in both indexing time and query processing time. In addition, our extended algorithm, DART+, shows significantly better performance than DART.
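A brute-force filter-refinement sketch that pins down the query semantics; DART/DART+ replace the linear scan with the grid index and the B+-tree on direction angles.

```python
import numpy as np

def dbrknn(P, P_dirs, S, q_idx, k, max_dist, max_angle):
    """Brute-force DBRkNN: P is m x 2 locations with direction angles P_dirs
    (radians); S is n x 2; q_idx indexes the query object within S."""
    q = S[q_idx]
    result = []
    for i, p in enumerate(P):
        if np.linalg.norm(p - q) > max_dist:
            continue                                  # distance filter
        toward_q = np.arctan2(*(q - p)[::-1])         # angle of q as seen from p
        diff = np.abs((P_dirs[i] - toward_q + np.pi) % (2 * np.pi) - np.pi)
        if diff > max_angle:
            continue                                  # direction filter
        d = np.linalg.norm(S - p, axis=1)             # refinement step
        if q_idx in np.argsort(d)[:k]:
            result.append(i)                          # q is among p's kNN in S
    return result
```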
