Similar Documents
20 similar documents retrieved (search time: 31 ms)
1.
Many data mining algorithms are sensitive to the size of the dataset: executing tasks on large datasets may require large amounts of memory and lead to very slow response times. Data reduction is therefore an important issue in the field of data mining. This paper proposes a novel method for spatial point-data reduction. The main idea is to extract from the original training set a small subset of instances composed of border instances, using a modified pulse coupled neural network (PCNN) model. Original training instances are mapped onto pulse coupled neurons, and a firing algorithm is presented for determining which instances lie in border regions and for filtering out noisy instances. The reduced set maintains the main characteristics of the original dataset and avoids the influence of noise, so it can preserve or even improve the quality of data mining results. The proposed method is a general data reduction algorithm that can be used to improve classification, regression and clustering algorithms. It achieves approximately linear time complexity and can be used to process large spatial datasets. Experiments demonstrate that the proposed method is efficient and effective.
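A minimal illustrative sketch of the border-retention idea in Python (plain nearest-neighbour class disagreement, not the paper's modified PCNN model; all names and parameters below are assumptions):

```python
# Illustrative sketch only: selects "border" instances as those whose nearest
# neighbours disagree on the class label. This is NOT the PCNN-based method of
# the paper; it only shows the general border-retention idea.
import numpy as np

def select_border_instances(X, y, k=5, min_disagreement=0.2):
    """Keep instances whose k nearest neighbours contain other classes."""
    X = np.asarray(X, dtype=float)
    y = np.asarray(y)
    keep = []
    for i in range(len(X)):
        d = np.linalg.norm(X - X[i], axis=1)
        d[i] = np.inf                      # ignore the point itself
        nn = np.argsort(d)[:k]             # indices of the k nearest neighbours
        disagreement = np.mean(y[nn] != y[i])
        if disagreement >= min_disagreement:
            keep.append(i)                 # likely lies in a border region
    return np.array(keep, dtype=int)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(0, 1, (200, 2)), rng.normal(3, 1, (200, 2))])
    y = np.array([0] * 200 + [1] * 200)
    border = select_border_instances(X, y)
    print(f"kept {len(border)} of {len(X)} instances")
```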

2.
Mining spatial colocation patterns: a different framework   (Cited by: 2; self-citations: 0; other citations: 2)
Recently, there has been considerable interest in mining spatial colocation patterns from large spatial datasets. Spatial colocation patterns represent subsets of spatial events whose instances are often located in close geographic proximity. Most studies of spatial colocation mining require the specification of two parameter constraints to find interesting colocation patterns. One is a minimum prevalence threshold for colocations, and the other is a distance threshold to define the spatial neighborhood. However, it is difficult for users to decide appropriate threshold values without prior knowledge of their task-specific spatial data. In this paper, we propose a different framework for spatial colocation pattern mining. To remove the first constraint, we propose the problem of finding the N-most prevalent colocated event sets, where N is the desired number of colocated event sets with the highest interest measure values for each pattern size. We develop two alternative algorithms for mining the N-most patterns. They reduce candidate events effectively and use a filter-and-refine strategy for efficiently finding colocation instances in a spatial dataset. We prove that the algorithms are correct and complete in finding the N-most prevalent colocation patterns. For the second constraint, the distance threshold for spatial neighborhood determination, we present various methods to estimate appropriate distance bounds from user input data. The result can help a user set a distance for a conceptualization of the spatial neighborhood. Our experimental results with real and synthetic datasets show that our algorithmic design is computationally effective in finding the N-most prevalent colocation patterns. The discovered patterns differ depending on the distance threshold, which shows that it is important to select appropriate neighbor distances.
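A minimal sketch of the prevalence measure behind the "N-most prevalent" formulation, the participation index for size-2 colocations (the paper's candidate reduction and filter-and-refine steps are not reproduced; the data layout and parameter names are assumptions):

```python
# Minimal sketch of the participation-index prevalence measure for size-2
# colocations, used only to illustrate the "N-most prevalent" idea.
from itertools import combinations
from math import dist

def top_n_pair_colocations(points, n=3, d_max=1.0):
    """points: list of (event_type, x, y). Returns the n pairs of event types
    with the highest participation index under neighbour distance d_max."""
    by_type = {}
    for idx, (t, x, y) in enumerate(points):
        by_type.setdefault(t, []).append((idx, x, y))

    scores = {}
    for a, b in combinations(sorted(by_type), 2):
        part_a, part_b = set(), set()
        for ia, xa, ya in by_type[a]:
            for ib, xb, yb in by_type[b]:
                if dist((xa, ya), (xb, yb)) <= d_max:
                    part_a.add(ia)          # instance of a participating in {a, b}
                    part_b.add(ib)
        pi = min(len(part_a) / len(by_type[a]), len(part_b) / len(by_type[b]))
        scores[(a, b)] = pi

    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:n]

points = [("A", 0, 0), ("B", 0.5, 0), ("A", 5, 5), ("B", 5.4, 5), ("C", 9, 9)]
print(top_n_pair_colocations(points, n=2, d_max=1.0))
```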

3.
We study an important data analysis operator, which extracts the k most important groups from data (i.e., the k groups with the highest aggregate values). In a data warehousing context, an example of the above query is “find the 10 combinations of product-type and month with the largest sum of sales”. The problem is challenging as the potential number of groups can be much larger than the memory capacity. We propose on-demand methods for efficient top-k groups processing under limited memory size. In particular, we design top-k groups retrieval techniques for three representative scenarios as follows. For the scenario with data physically ordered by measure, we propose the write-optimized multi-pass sorted access algorithm (WMSA), which exploits available memory for efficient top-k groups computation. Regarding the scenario with unordered data, we develop the recursive hash algorithm (RHA), which applies hashing with early aggregation, coupled with branch-and-bound techniques and derivation heuristics for tight score bounds of hash partitions. Next, we design the clustered groups algorithm (CGA), which accelerates top-k groups processing for the case where data is clustered by a subset of group-by attributes. Extensive experiments with real and synthetic datasets demonstrate the applicability and efficiency of the proposed algorithms.
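For concreteness, a small in-memory illustration of the query itself — top-k groups by SUM via hashing and a heap — which ignores the limited-memory setting that WMSA, RHA and CGA are designed for:

```python
# In-memory illustration of the top-k groups query (not the paper's
# memory-bounded algorithms): hash aggregation followed by a heap selection.
import heapq
from collections import defaultdict

def top_k_groups(rows, group_cols, measure_col, k):
    """rows: iterable of dicts. Returns the k (group, sum) pairs with the
    largest aggregate value."""
    sums = defaultdict(float)
    for row in rows:
        key = tuple(row[c] for c in group_cols)
        sums[key] += row[measure_col]
    return heapq.nlargest(k, sums.items(), key=lambda kv: kv[1])

rows = [
    {"product": "tv", "month": "Jan", "sales": 120.0},
    {"product": "tv", "month": "Feb", "sales": 90.0},
    {"product": "phone", "month": "Jan", "sales": 300.0},
    {"product": "phone", "month": "Feb", "sales": 150.0},
]
print(top_k_groups(rows, ["product", "month"], "sales", k=2))
```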

4.
We present a highly scalable parallel solution for the Subset-Sum Problem on Graphics Processing Units (GPUs) that substantially reduces the memory access by the device and, consequently, decreases the total runtime. We test this algorithm only on hard instances, which require the exhaustion of the entire search space, instead of simple random benchmarks. On a GPU with only 1.2 GB of global memory, we address hard instances with 100,000 items limited to 10^6 and 200 items limited to 10^8. Our algorithm achieves excellent runtimes outperforming the best-known practical and parallel algorithms, reaching speed-ups higher than 1000 in the best case compared to its sequential version.
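As a point of reference, the classic sequential bitset dynamic program for subset sum (a baseline sketch only, not the parallel GPU method of the paper):

```python
# Sequential baseline only: the classic bitset dynamic program for subset sum.
def subset_sum_reachable(items, target):
    """Return True if some subset of `items` sums exactly to `target`."""
    reachable = 1  # bit i set <=> sum i is reachable with some subset
    mask = (1 << (target + 1)) - 1
    for v in items:
        reachable |= reachable << v
        reachable &= mask          # never track sums above the target
    return bool(reachable >> target & 1)

print(subset_sum_reachable([3, 34, 4, 12, 5, 2], 9))    # True  (4 + 5)
print(subset_sum_reachable([3, 34, 4, 12, 5, 2], 30))   # False
```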

5.
Level set methods [Osher and Sethian. Fronts propagating with curvature-dependent speed: algorithms based on Hamilton–Jacobi formulations. J. Comput. Phys. 79 (1988) 12] have proved very successful for interface tracking in many different areas of computational science. However, current level set methods are limited by a poor balance between computational efficiency and storage requirements. Tree-based methods have relatively slow access times, whereas narrow band schemes lead to very large memory footprints for high resolution interfaces. In this paper we present a level set scheme for which both computational complexity and storage requirements scale with the size of the interface. Our novel level set data structure and algorithms are fast, cache efficient and allow for a very low memory footprint when representing high resolution level sets. We use a time-dependent and interface adapting grid dubbed the “Dynamic Tubular Grid” or DT-Grid. Additionally, it has been optimized for advanced finite difference schemes currently employed in accurate level set computations. As a key feature of the DT-Grid, the associated interface propagations are not limited to any computational box and can expand freely. We present several numerical evaluations, including a level set simulation on a grid with an effective resolution of 1024^3.

6.
The size of spatial scientific datasets is steadily increasing due to improvements in instruments and availability of computational resources. However, much of the research on efficient storage and access to spatial datasets has focused on large multidimensional arrays. In contrast, unstructured grids consisting of collections of simplices (e.g. triangles or tetrahedra) present special challenges that have received less attention. Data values found at the vertices of the simplices may be dispersed throughout a datafile, producing especially poor disk locality. Our previous work has focused on addressing this locality problem. In this paper, we reorganize the unstructured grid to improve locality of disk access by maintaining the spatial neighborhood relationships inherent in the unstructured grid. This reorganization produces significant gains in performance by reducing the number of accesses made to the data file. We also examine the effects of different chunking configurations on data retrieval performance. A major motivation for reorganizing the unstructured grid is to allow the application of iteration-aware prefetching. Applying this prefetching method to unstructured grids produces further performance gains over and above the gains seen from reorganization alone. The work presented in this journal paper contains at least 40% new material not included in our conference paper (Akande and Rhodes 2013).
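One generic way to obtain the kind of locality described above is to sort simplex records along a space-filling curve of their centroids; a rough sketch follows (this is not the paper's reorganization or its iteration-aware prefetching scheme):

```python
# Generic locality sketch: sort triangle records along a Z-order (Morton)
# curve of their centroids so spatially nearby simplices end up nearby on disk.
def morton2d(x, y, bits=16):
    """Interleave the bits of two integer coordinates."""
    code = 0
    for i in range(bits):
        code |= ((x >> i) & 1) << (2 * i) | ((y >> i) & 1) << (2 * i + 1)
    return code

def reorder_triangles(vertices, triangles, bits=16):
    """vertices: list of (x, y); triangles: list of vertex-index triples."""
    xs = [v[0] for v in vertices]
    ys = [v[1] for v in vertices]
    def quantise(v, lo, hi):
        return int((v - lo) / (hi - lo or 1.0) * ((1 << bits) - 1))
    def key(tri):
        cx = sum(vertices[i][0] for i in tri) / 3.0
        cy = sum(vertices[i][1] for i in tri) / 3.0
        return morton2d(quantise(cx, min(xs), max(xs)),
                        quantise(cy, min(ys), max(ys)), bits)
    return sorted(triangles, key=key)

verts = [(0, 0), (1, 0), (0, 1), (9, 9), (10, 9), (9, 10)]
tris = [(3, 4, 5), (0, 1, 2)]
print(reorder_triangles(verts, tris))   # triangle near the origin comes first
```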

7.
Instance selection is becoming increasingly relevant due to the huge amount of data that is constantly being produced in many fields of research. Although current algorithms are useful for fairly large datasets, scaling problems are found when the number of instances is in the hundreds of thousands or millions. When we face huge problems, scalability becomes an issue, and most algorithms are not applicable. Thus, paradoxically, instance selection algorithms are for the most part impracticable for the same problems that would benefit most from their use. This paper presents a way of avoiding this difficulty using several rounds of instance selection on subsets of the original dataset. These rounds are combined using a voting scheme to allow good performance in terms of testing error and storage reduction, while the execution time of the process is significantly reduced. The method is particularly efficient when we use instance selection algorithms that are high in computational cost. The proposed approach shares the philosophy underlying the construction of ensembles of classifiers. In an ensemble, several weak learners are combined to form a strong classifier; in our method several weak (in the sense that they are applied to subsets of the data) instance selection algorithms are combined to produce a strong and fast instance selection method. An extensive comparison of 30 medium and large datasets from the UCI Machine Learning Repository using 3 different classifiers shows the usefulness of our method. Additionally, the method is applied to 5 huge datasets (from three hundred thousand to more than a million instances) with good results and fast execution time.
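A minimal sketch of the rounds-plus-voting idea, using a deliberately simple stand-in selector on each subset (the actual instance selection algorithms combined in the paper are not reproduced; all parameter names are assumptions):

```python
# Sketch of rounds + voting: run a simple instance selector on random subsets
# over several rounds and keep instances selected in enough rounds. The inner
# selector here is a toy stand-in, not one of the algorithms used in the paper.
import numpy as np

def simple_selector(X, y, idx, k=3):
    """Toy selector: keep instances whose neighbours inside the subset include
    another class (border-like instances)."""
    keep = set()
    for a in idx:
        d = np.linalg.norm(X[idx] - X[a], axis=1)
        order = np.array(idx)[np.argsort(d)[1:k + 1]]
        if np.any(y[order] != y[a]):
            keep.add(a)
    return keep

def ensemble_selection(X, y, rounds=10, subsets=4, min_votes=3, seed=0):
    rng = np.random.default_rng(seed)
    votes = np.zeros(len(X), dtype=int)
    for _ in range(rounds):
        perm = rng.permutation(len(X))
        for part in np.array_split(perm, subsets):
            for a in simple_selector(X, y, list(part)):
                votes[a] += 1
    return np.flatnonzero(votes >= min_votes)

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(2.5, 1, (100, 2))])
y = np.array([0] * 100 + [1] * 100)
print("selected:", len(ensemble_selection(X, y)), "of", len(X))
```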

8.
Many learning problems require handling high dimensional datasets with a relatively small number of instances. Learning algorithms are thus confronted with the curse of dimensionality, and need to address it in order to be effective. Examples of these types of data include the bag-of-words representation in text classification problems and gene expression data for tumor detection/classification. Usually, among the high number of features characterizing the instances, many may be irrelevant (or even detrimental) for the learning tasks. It is thus clear that there is a need for adequate techniques for feature representation, reduction, and selection, to improve classification accuracy and to reduce memory requirements. In this paper, we propose combined unsupervised feature discretization and feature selection techniques, suitable for medium and high-dimensional datasets. The experimental results on several standard datasets, with both sparse and dense features, show the efficiency of the proposed techniques as well as improvements over previous related techniques.

9.
Dehne, Dittrich, Hutchinson. Algorithmica, 2008, 36(2): 97-122
External memory (EM) algorithms are designed for large-scale computational problems in which the size of the internal memory of the computer is only a small fraction of the problem size. Typical EM algorithms are specially crafted for the EM situation. In the past, several attempts have been made to relate the large body of work on parallel algorithms to EM, but with limited success. The combination of EM computing, on multiple disks, with multiprocessor parallelism has been posed as a challenge by the ACM Working Group on Storage I/O for Large-Scale Computing. In this paper we provide a simulation technique which produces efficient parallel EM algorithms from efficient BSP-like parallel algorithms. The techniques obtained can accommodate one or multiple processors on the EM target machine, each with one or more disks, and they also adapt to the disk blocking factor of the target machine. When applied to existing BSP-like algorithms, our simulation technique produces improved parallel EM algorithms for a large number of problems.

10.
In recent years, high-utility itemset mining has emerged as an important data mining task. However, it remains computationally expensive both in terms of runtime and memory consumption. It is thus an important challenge to design more efficient algorithms for this task. In this paper, we address this issue by proposing a novel algorithm named EFIM (EFficient high-utility Itemset Mining), which introduces several new ideas to more efficiently discover high-utility itemsets. EFIM relies on two new upper bounds named revised sub-tree utility and local utility to more effectively prune the search space. It also introduces a novel array-based utility counting technique named Fast Utility Counting to calculate these upper bounds in linear time and space. Moreover, to reduce the cost of database scans, EFIM proposes efficient database projection and transaction merging techniques named High-utility Database Projection and High-utility Transaction Merging (HTM), also performed in linear time. An extensive experimental study on various datasets shows that EFIM is in general two to three orders of magnitude faster than the state-of-the-art algorithms d2HUP, HUI-Miner, HUP-Miner, FHM and UP-Growth+ on dense datasets and performs quite well on sparse datasets. Moreover, a key advantage of EFIM is its low memory consumption.
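For background, a tiny sketch of the underlying measures only — exact itemset utility and the classic transaction-weighted utility (TWU) pruning bound; EFIM's revised sub-tree utility, local utility, projection and merging techniques are not reproduced:

```python
# Illustration of the basic measures in high-utility itemset mining: exact
# itemset utility and the TWU upper bound used for pruning unpromising items.
def itemset_utility(db, itemset):
    """db: list of transactions, each a dict item -> utility in that
    transaction. Returns the total utility of `itemset` over the database."""
    total = 0
    for tx in db:
        if all(i in tx for i in itemset):
            total += sum(tx[i] for i in itemset)
    return total

def twu(db, item):
    """Transaction-weighted utility: an anti-monotone upper bound used to
    prune items that cannot appear in any high-utility itemset."""
    return sum(sum(tx.values()) for tx in db if item in tx)

db = [
    {"a": 5, "b": 2, "c": 1},
    {"a": 10, "c": 6},
    {"b": 4, "c": 3, "d": 2},
]
min_util = 15
promising = [i for i in {"a", "b", "c", "d"} if twu(db, i) >= min_util]
print("promising items:", sorted(promising))            # 'd' is pruned
print("u({a, c}) =", itemset_utility(db, {"a", "c"}))    # (5+1) + (10+6) = 22
```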

11.
The k-means algorithm and its variations are known to be fast clustering algorithms. However, they are sensitive to the choice of starting points and are inefficient for solving clustering problems in large datasets. Recently, incremental approaches have been developed to resolve difficulties with the choice of starting points. The global k-means and the modified global k-means algorithms are based on such an approach. They iteratively add one cluster center at a time. Numerical experiments show that these algorithms considerably improve the k-means algorithm. However, they require storing the whole affinity matrix or computing this matrix at each iteration. This makes both algorithms time-consuming and memory-demanding for clustering even moderately large datasets. In this paper, a new version of the modified global k-means algorithm is proposed. We introduce an auxiliary cluster function to generate a set of starting points lying in different parts of the dataset. We exploit information gathered in previous iterations of the incremental algorithm to eliminate the need to compute or store the whole affinity matrix, thereby reducing computational effort and memory usage. Results of numerical experiments on six standard datasets demonstrate that the new algorithm is more efficient than the global and the modified global k-means algorithms.
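A simplified sketch of the incremental idea — growing the set of centers one at a time and refining with ordinary k-means steps — using a cheap farthest-point seeding rule rather than the paper's auxiliary cluster function:

```python
# Simplified incremental k-means: add one centre at a time, seeding it at the
# point currently worst served, then refine. Not the paper's algorithm or its
# memory-saving bookkeeping; a sketch of the incremental scheme only.
import numpy as np

def kmeans_refine(X, centers, iters=20):
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        centers = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                            else centers[j] for j in range(len(centers))])
    return centers

def incremental_kmeans(X, k):
    centers = X.mean(axis=0, keepdims=True)          # k = 1: the centroid
    for _ in range(2, k + 1):
        d2 = ((X[:, None, :] - centers[None]) ** 2).sum(-1).min(axis=1)
        candidate = X[np.argmax(d2)]                  # point worst served so far
        centers = kmeans_refine(X, np.vstack([centers, candidate]))
    return centers

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(c, 0.3, (100, 2)) for c in [(0, 0), (3, 0), (0, 3)]])
print(incremental_kmeans(X, 3).round(2))
```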

12.
Many recent sensor devices are being equipped with flash memories due to their unique advantages: non-volatile storage, small size, shock-resistance, fast read access and power efficiency. The ability to store large amounts of data on sensor devices necessitates efficient indexing structures to locate required information. The challenge with flash memories is that they are unsuitable for maintaining dynamic data structures because of their specific read, write and wear constraints; this, combined with the very limited data memory on sensor devices, prohibits the direct application of most existing indexing methods. In this paper we propose a suite of index structures and algorithms which permit us to efficiently support several types of historical online queries on flash-equipped sensor devices: temporally constrained aggregate queries, historical online sampling queries and pattern matching queries. We have implemented our methods using nesC and have run extensive experiments in TOSSIM, the simulation environment of TinyOS. Our experimental evaluation using trace-driven real world data sets demonstrates the efficiency of our indexing algorithms.

13.
This paper presents a set of methods for time integration of problems arising from finite element semidiscretizations. The purpose is to obtain computationally efficient methods which possess higher-order accuracy and controllable dissipation in the spurious high modes. The methods are developed and analysed by a general collocation methodology which leads to the class of Nørsett approximants. An algorithmic parameter is used to achieve an effective control over numerical dissipation. Moreover, a simple and efficient implementation scheme is presented. At each time step, algorithms based on p-order collocation polynomials require the solution of p sets of linear algebraic equations with the same coefficient matrix. In this way, a single factorization is needed and no transformations are required to recover the approximate solution at the end or within the time interval. To demonstrate the performance of the proposed algorithms, a wide experimental evaluation is carried out on typical test problems in finite element transient analysis.
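A sketch of the implementation point about a single factorization, shown for plain backward Euler rather than the paper's p-stage Nørsett collocation schemes (the matrices and parameters below are toy assumptions):

```python
# Implementation point only: with a constant time step the coefficient matrix
# is fixed, so it can be factorized once and reused at every step. Shown here
# for backward Euler on M u' = -K u, far simpler than the paper's schemes.
import numpy as np
from scipy.linalg import lu_factor, lu_solve

n, dt, steps = 50, 0.01, 200
K = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # toy stiffness matrix
M = np.eye(n)                                          # toy (lumped) mass matrix

A = M + dt * K                 # constant coefficient matrix
lu, piv = lu_factor(A)         # single factorization for the whole run

u = np.ones(n)
for _ in range(steps):
    u = lu_solve((lu, piv), M @ u)   # one cheap back-substitution per step

print("max |u| after integration:", float(np.max(np.abs(u))))
```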

14.
Over the last few years, multiply-annotated data has become a very popular source of information. Online platforms such as Amazon Mechanical Turk have revolutionized the labelling process needed for any classification task, sharing the effort among a number of annotators (instead of the classical single expert). This crowdsourcing approach has introduced new challenging problems, such as handling disagreements on the annotated samples or combining the unknown expertise of the annotators. Probabilistic methods, such as Gaussian Processes (GP), have proven successful in modelling this new crowdsourcing scenario. However, GPs do not scale up well with the training set size, which makes them prohibitive for medium-to-large datasets (beyond 10K training instances). This constitutes a serious limitation for current real-world applications. In this work, we introduce two scalable and efficient GP-based crowdsourcing methods that allow for processing previously-prohibitive datasets. The first one is an efficient and fast approximation to GP with a squared exponential (SE) kernel. The second allows for learning a more flexible kernel at the expense of a heavier training (but still scalable to large datasets). Since the latter is not a GP-SE approximation, it can also be considered a whole new scalable and efficient crowdsourcing method, useful for any dataset size. Both methods use Fourier features and variational inference, can predict the class of new samples, and estimate the expertise of the involved annotators. A thorough experimental evaluation compares them with state-of-the-art probabilistic approaches on synthetic and real crowdsourcing datasets of different sizes. They stand out as the best-performing approach for large-scale problems. Moreover, the second method is competitive with the current state-of-the-art for small datasets.
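A sketch of the Fourier-feature ingredient only: random features approximating the squared-exponential kernel, followed by an ordinary ridge fit (the variational inference and per-annotator expertise model are not reproduced):

```python
# Only the Fourier-feature ingredient: random features approximating the SE
# kernel, plus a ridge classifier on the feature map. Not the paper's model.
import numpy as np

def rff(X, n_features=200, lengthscale=1.0, seed=0):
    """Random Fourier feature map approximating the SE kernel."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    W = rng.normal(0.0, 1.0 / lengthscale, size=(d, n_features))
    b = rng.uniform(0.0, 2 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (200, 2)), rng.normal(2, 1, (200, 2))])
y = np.array([-1.0] * 200 + [1.0] * 200)

Z = rff(X)                                     # n x D feature map, D << n
w = np.linalg.solve(Z.T @ Z + 1e-2 * np.eye(Z.shape[1]), Z.T @ y)  # ridge fit
acc = np.mean(np.sign(Z @ w) == y)
print(f"training accuracy with {Z.shape[1]} random features: {acc:.2f}")
```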

15.
Flash memory efficient LTL model checking   (Cited by: 1; self-citations: 0; other citations: 1)
As the capacity and speed of flash memories in the form of solid state disks grow, they are becoming a practical alternative to standard magnetic drives. Currently, most solid-state disks are based on NAND technology and are much faster than magnetic disks in random reads, while in random writes they generally are not. So far, large-scale LTL model checking algorithms have been designed to employ external memory optimized for magnetic disks. We propose algorithms optimized for flash memory access. In contrast to approaches relying on the delayed detection of duplicate states, in this work we design and exploit appropriate hash functions to re-invent immediate duplicate detection. For flash memory efficient on-the-fly LTL model checking, which aims at finding any counterexample to the specified LTL property, we study hash functions adapted to the two-level hierarchy of RAM and flash memory. For flash memory efficient off-line LTL model checking, which aims at generating a minimal counterexample and scans the entire state space at least once, we analyze the effect of outsourcing a memory-based perfect hash function from RAM to flash memory. Since the characteristics of flash memories differ from those of magnetic hard disks, the existing I/O complexity model is no longer sufficient. Therefore, we provide an extended model for the computation of the I/O complexity adapted to flash memories that better fits the observed behavior of our algorithms.

16.
Due to its NP-hard nature, it is still difficult to find an optimal solution even for instances of the binary knapsack problem as small as 100 variables. In this paper, we develop a three-level hyper-heuristic framework to generate algorithms for the problem. Algorithms are generated from elementary components and multiple sets of problem instances. The best algorithms are selected to go through a second evaluation step, where they are tested on problem instances that differ in size and difficulty. The problem instances are generated according to methods found in the literature. On all of the larger problem instances, the generated algorithms have less than 1% error with respect to the optimal solution. Additionally, the generated algorithms are efficient, taking on average a fraction of a second to find a solution for any instance, with a standard deviation of 1 s. In terms of structure, the hyper-heuristic algorithms are compact in size compared with those in the literature, allowing an in-depth analysis of their structure and their presentation to the scientific community.
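An example of the kind of elementary component such a framework could assemble — a greedy value/weight insertion heuristic for the 0/1 knapsack (the three-level generation framework itself is not shown):

```python
# One elementary component of the kind a hyper-heuristic could combine:
# greedy insertion of items by decreasing value/weight ratio.
def greedy_knapsack(values, weights, capacity):
    order = sorted(range(len(values)),
                   key=lambda i: values[i] / weights[i], reverse=True)
    total_value, total_weight, chosen = 0, 0, []
    for i in order:
        if total_weight + weights[i] <= capacity:
            chosen.append(i)
            total_weight += weights[i]
            total_value += values[i]
    return total_value, sorted(chosen)

values = [60, 100, 120, 30]
weights = [10, 20, 30, 5]
print(greedy_knapsack(values, weights, capacity=50))   # heuristic, not optimal
```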

17.
The contradictory requirements of data privacy and data analysis have fostered the development of statistical disclosure control techniques. In this context, microaggregation is one of the most frequently used methods since it offers a good trade-off between simplicity and quality. Unfortunately, most of the currently available microaggregation algorithms have been devised to work with small datasets, while the size of current databases is constantly increasing. The usual way to tackle this problem is to partition large data volumes into smaller fragments that can be processed in reasonable time by available algorithms. This solution is applied at the cost of losing quality. In this paper, we revisit the computational needs of microaggregation, showing that it can be reduced to two steps: sorting the dataset with respect to a vantage point and performing a set of k-nearest neighbor searches. Based on this point of view, we propose three new efficient quality-preserving microaggregation algorithms built on k-nearest neighbor search techniques. We compare our approaches with the most significant strategies presented in the literature using three very large real datasets. Experimental results show that our proposals outperform previous techniques by keeping a better balance between performance and the quality of the anonymized dataset.
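A minimal sketch of the two-step view described above: sort records by distance to a vantage point, cut the ordering into groups of at least k, and publish each record as its group centroid (the paper's k-nearest neighbor refinements on top of this ordering are not included):

```python
# Minimal microaggregation sketch: vantage-point ordering + fixed-size groups,
# each record replaced by its group centroid. Not the paper's refined methods.
import numpy as np

def microaggregate(X, k, vantage=None):
    X = np.asarray(X, dtype=float)
    if vantage is None:
        vantage = X.mean(axis=0)            # an assumed default vantage point
    order = np.argsort(np.linalg.norm(X - vantage, axis=1))
    out = np.empty_like(X)
    for start in range(0, len(X), k):
        group = order[start:start + k]
        if len(group) < k:                  # merge a short tail into last group
            group = order[start - k:]
        out[group] = X[group].mean(axis=0)
    return out

rng = np.random.default_rng(0)
X = rng.normal(size=(10, 2))
print(microaggregate(X, k=3).round(2))
```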

18.
Efficient processing of distance-based queries (DBQs) is of great importance in spatial databases due to the wide range of applications that may address such queries. The most representative and well-known DBQs are the K Nearest Neighbors Query (KNNQ), ρ Distance Range Query (ρDRQ), K Closest Pairs Query (KCPQ) and ρ Distance Join Query (ρDJQ). In this paper, we propose new pruning mechanisms and apply them in the design of new Recursive Best-First Search (RBFS) algorithms for DBQs between spatial objects indexed in R-trees. RBFS is a general search algorithm that runs in linear space and expands nodes in best-first order, but it can suffer from node re-expansion overhead (i.e. to expand nodes in best-first order, some nodes may be considered more than once). The R-tree and its variations are commonly cited spatial access methods that can be used for answering such spatial queries. Moreover, an exhaustive experimental study using R-trees is included, which leads to several conclusions about the efficiency of the proposed RBFS algorithm and its comparison with other search algorithms (Best-First Search (BFS) and Depth-First Branch-and-Bound (DFBnB)), in terms of disk accesses, response time and main memory requirements, taking into account several important parameters such as the maximum branching factor (Cmax), the cardinality of the final query result (K), the distance threshold (ρ) and the size of a global LRU buffer (B). In general, RBFS is competitive for KNNQ and KCPQ when the maximum branching factor (Cmax) is large enough (even better than DFBnB and very close to BFS), and it is a good alternative when main memory is limited due to a high process load on the system, since its space consumption is linear in the height of the R-trees. Nevertheless, RBFS is the worst alternative for ρDRQ and ρDJQ. DFBnB is also a linear-space algorithm and behaves like BFS for ρDRQ and ρDJQ; it is the best alternative when an LRU buffer is included. Finally, we verify experimentally that BFS is the best for all DBQs, but it can consume a large amount of main memory to perform spatial queries.

19.
The manufacturer's pallet loading problem consists of arranging, orthogonally and without overlapping, the maximum number of boxes with dimensions (l,w) or (w,l) onto a rectangular pallet with dimensions (L,W). This problem has been successfully handled by block heuristics, which generate loading patterns composed of one or more blocks in which the boxes have the same orientation. A common feature of such methods is that the solutions provided are limited to the so-called first-order non-guillotine patterns. In this paper we propose an approach based on the incorporation of simple tabu search (without longer-term memory structures) into block heuristics. Starting from an initial loading pattern, the algorithm performs moves that increase the size of selected blocks in the current pattern; as a result, other blocks are decreased, eliminated or created. Computational results indicate that the approach is capable of generating superior-order optimal patterns for difficult instances reported in the literature.

20.
In Routing Problems the aim is to determine a minimum cost traversal over a graph satisfying some specified constraints. Most of these are NP-hard problems, and many different heuristic solution algorithms have been proposed. The name Monte Carlo (MC) applies to a set of heuristic procedures with the common feature of using random numbers to simulate a given process. The MC approach has not previously been applied to Routing Problems in the literature. The purpose of this paper is to demonstrate that MC methods can be useful in implementing heuristic algorithms for Routing Problems. In particular, we design an efficient MC heuristic algorithm for the well-known Rural Postman Problem (RPP), for which we have a set of instances with known optimal solutions taken from the literature. The RPP consists of finding a minimum cost traversal of a specified arc subset of a graph. Given that the RPP is an NP-hard problem, heuristic algorithms are of interest both to handle large instances and to provide upper bounds that can be used in branch-and-cut procedures. In this paper we propose a heuristic algorithm for the RPP based on Monte Carlo methods. We simulate a vehicle travelling randomly over the graph, jumping from one node to another on the basis of certain probabilities. Monte Carlo methods provide a simple approach to many different Routing Problems and are easily implemented in computer code. The application of this algorithm to a set of RPP instances taken from the literature demonstrates that, using the appropriate probabilities, it is also efficient.
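A toy sketch of the simulation idea: a walker jumps between adjacent nodes with probabilities biased toward still-required edges until every required edge is traversed; the bias weights and example graph are assumptions, and returning to the start and tour improvement are left out:

```python
# Toy version of the random-walk simulation: a walker moves over the graph,
# biased toward edges that are still required, until all required edges are
# covered. The probabilities here are arbitrary, unlike the tuned ones in the
# paper, and the resulting walk is open (no return to the depot).
import random

def random_walk_cover(adj, required, start, bias=5.0, max_steps=10_000, seed=0):
    """adj: {node: {neighbour: cost}}; required: set of frozenset edges."""
    rng = random.Random(seed)
    node, cost, tour, todo = start, 0.0, [start], set(required)
    for _ in range(max_steps):
        if not todo:
            return tour, cost
        nbrs = list(adj[node])
        weights = [bias if frozenset((node, n)) in todo else 1.0 for n in nbrs]
        nxt = rng.choices(nbrs, weights=weights, k=1)[0]
        cost += adj[node][nxt]
        todo.discard(frozenset((node, nxt)))
        tour.append(nxt)
        node = nxt
    return None, None                      # gave up: no covering walk found

adj = {
    "a": {"b": 1, "c": 4},
    "b": {"a": 1, "c": 2, "d": 5},
    "c": {"a": 4, "b": 2, "d": 1},
    "d": {"b": 5, "c": 1},
}
required = {frozenset(("a", "b")), frozenset(("c", "d"))}
tour, cost = random_walk_cover(adj, required, start="a")
print("tour:", tour, "cost:", cost)
```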
