Similar Documents
20 similar documents found (search time: 31 ms)
1.
We consider tree series transducers, which were introduced in [EFV], and define the tree-to-tree series transformations computed by them in two different ways. One definition is based on the tree series substitution taken from [EFV], while the other is based on a new tree series substitution introduced in this paper. The main difference between the two substitutions is that the first does not take into account the number of occurrences of the substitution variables, while the second does. We compare the two ways of computing tree-to-tree series transformations and show that, for the new substitution, fundamental relations from the theory of tree transducers carry over to tree series transducers.

2.
Punnen  Margot  Kabadi 《Algorithmica》2008,35(2):111-127
Abstract. We show that the 2-Opt and 3-Opt heuristics for the traveling salesman problem (TSP) on the complete graph K_n produce a solution no worse than the average cost of a tour in K_n in a polynomial number of iterations. As a consequence, we get that the domination numbers of the 2-Opt, 3-Opt, Carlier-Villon, Shortest Path Ejection Chain, and Lin-Kernighan heuristics are all at least (n-2)!/2. An upper bound on the domination number of the Christofides heuristic is given, and for the Double Tree heuristic and a variation of the Christofides heuristic the domination numbers are shown to be one (even if the edge costs satisfy the triangle inequality). Further, unless P = NP, no polynomial time approximation algorithm exists for the TSP on the complete digraph with domination number at least (n-1)!-k for any constant k, or with domination number at least (n-1)! - ((k/(k+1))(n+r))! - 1 for any non-negative constants r and k such that (n+r) ≡ 0 (mod (k+1)). The complexities of finding the median value of the costs of all tours in the complete digraph, and of similar problems, are also studied.
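For symmetric costs, the average cost of a tour in K_n has a simple closed form (every edge lies on the same fraction of tours), which makes the paper's benchmark easy to check experimentally. The sketch below (Python; illustrative only, not the paper's construction) runs plain 2-Opt to a local optimum on a random instance and compares the result with the average tour cost, 2/(n-1) times the sum of all edge costs:

```python
# Hedged sketch (not the paper's proof): compare a 2-Opt local optimum with
# the average tour cost in K_n, which for symmetric costs equals
# 2/(n-1) times the sum of all edge costs.
import random, itertools

def tour_cost(tour, c):
    return sum(c[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def average_tour_cost(c):
    n = len(c)
    total = sum(c[i][j] for i in range(n) for j in range(i + 1, n))
    return 2.0 * total / (n - 1)

def two_opt(tour, c):
    """Repeatedly reverse segments while an improving 2-Opt move exists."""
    improved = True
    while improved:
        improved = False
        n = len(tour)
        for i in range(n - 1):
            for j in range(i + 2, n if i > 0 else n - 1):
                a, b = tour[i], tour[i + 1]
                d, e = tour[j], tour[(j + 1) % n]
                if c[a][b] + c[d][e] > c[a][d] + c[b][e] + 1e-12:
                    tour[i + 1:j + 1] = reversed(tour[i + 1:j + 1])
                    improved = True
    return tour

random.seed(0)
n = 12
c = [[0.0] * n for _ in range(n)]
for i, j in itertools.combinations(range(n), 2):
    c[i][j] = c[j][i] = random.uniform(1, 10)

tour = two_opt(list(range(n)), c)
print(tour_cost(tour, c), "<=", average_tour_cost(c))
```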

3.
Abstract. We consider a simple restriction of the PRAM model (called PPRAM), where the input is arbitrarily partitioned between a fixed set of p processors and the shared memory is restricted to m cells. This model allows for investigation of the tradeoffs/bottlenecks with respect to the communication bandwidth (modeled by the shared memory size m) and the number of processors p. The model is quite simple and allows the design of optimal algorithms without losing the effect of communication bottlenecks. We focus on the PPRAM complexity of problems that have Θ(n) sequential solutions (where n is the input size), and where m ≤ p ≤ n. We show essentially tight time bounds (up to logarithmic factors) for several problems in this model, such as summing, Boolean threshold, routing, integer sorting, list reversal, and k-selection. We typically get two sorts of complexity behaviors for these problems: one type is Θ(n/p + p/m), which means that the time scales with the number of processors and with the memory size (in appropriate ranges) but not with both; the other is Θ(n/m), which means that the running time does not scale with p and reflects a communication bottleneck (as long as m < p). We are not aware of any problem whose complexity scales with both p and m. This might explain why in actual implementations one often fails to get p-scalability for p close to n.

4.
Cohen  Kaplan  Zwick 《Algorithmica》2002,33(4):511-516
Abstract. We present a competitive analysis of the LRFU paging algorithm, a hybrid of the LRU (Least Recently Used) and LFU (Least Frequently Used) paging algorithms. We show that the competitive ratio of LRFU is k + ⌈log(1-λ)/log λ⌉ - 1, where 1/2 < λ ≤ 1 is the decay parameter used by the LRFU algorithm and k is the size of the cache. This supplies, in particular, the first natural paging algorithms that are competitive but are not optimally competitive, answering a question of Borodin and El-Yaniv. Although LRFU, as it turns out, is not optimally competitive, it is expected to behave well in practice, as it combines the benefits of both LRU and LFU.
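As a rough illustration of the algorithm being analyzed, here is a minimal LRFU-style cache in Python. The exact weighting function and parameter conventions differ between papers; this sketch simply decays each page's combined recency/frequency value geometrically and evicts the page with the smallest value:

```python
# Hedged sketch of an LRFU-style cache: each cached page keeps a combined
# recency/frequency value that decays geometrically (factor `lam` per time
# step) and is bumped on every hit; the page with the smallest value is
# evicted.  Parameter conventions vary between papers; this is one common form.
class LRFUCache:
    def __init__(self, k, lam=0.9):
        self.k, self.lam = k, lam
        self.t = 0
        self.value = {}        # page -> decayed reference value
        self.last_touch = {}   # page -> time of last update

    def _refresh(self, page):
        age = self.t - self.last_touch[page]
        self.value[page] *= self.lam ** age
        self.last_touch[page] = self.t

    def access(self, page):
        self.t += 1
        if page in self.value:          # hit: decay, then reward the reference
            self._refresh(page)
            self.value[page] += 1.0
            return True
        if len(self.value) >= self.k:   # miss on a full cache: evict the smallest value
            for p in list(self.value):
                self._refresh(p)
            victim = min(self.value, key=self.value.get)
            del self.value[victim], self.last_touch[victim]
        self.value[page] = 1.0
        self.last_touch[page] = self.t
        return False

cache = LRFUCache(k=3, lam=0.9)
hits = [cache.access(p) for p in [1, 2, 3, 1, 1, 4, 2, 1]]
print(hits)
```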

5.
Iwata 《Algorithmica》2008,36(4):331-341
Abstract. This paper presents a new algorithm for computing the maximum degree δ_k(A) of a minor of order k in a matrix pencil A(s). The problem is of practical significance in the fields of numerical analysis and systems control. The algorithm adopts the general framework of "combinatorial relaxation" due to Murota. It first solves a weighted bipartite matching problem to obtain an estimate of δ_k(A), and then checks whether the estimate is correct, exploiting the optimal dual solution. In case of incorrectness, it modifies the matrix pencil A(s) to improve the estimate without changing δ_k(A). The present algorithm performs this matrix modification by an equivalence transformation with constant matrices, whereas the previous one uses biproper rational function matrices. Thus the present approach saves memory space and reduces the running time bound by a factor of rank A.
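The first phase, estimating δ_k(A) via a weighted bipartite matching, can be mimicked by brute force on tiny pencils. The sketch below (Python; illustrative, exponential-time, and limited to pencils A(s) = A0 + s·A1) computes the matching bound, which can only overestimate δ_k(A); detecting and repairing such overestimates is what the combinatorial relaxation step of the paper does:

```python
# Hedged sketch: the combinatorial (matching) upper bound on the maximum
# degree delta_k(A) of an order-k minor of a pencil A(s) = A0 + s*A1.
# Edge (i, j) gets weight deg A_ij(s) (or -inf for a zero entry); the bound
# is the maximum weight of a size-k matching.  Brute force, for illustration.
from itertools import combinations, permutations

NEG_INF = float('-inf')

def entry_degree(a0, a1):
    """Degree in s of the entry a0 + a1*s (NEG_INF for the zero entry)."""
    if a1 != 0:
        return 1
    return 0 if a0 != 0 else NEG_INF

def matching_bound(A0, A1, k):
    n, m = len(A0), len(A0[0])
    deg = [[entry_degree(A0[i][j], A1[i][j]) for j in range(m)] for i in range(n)]
    best = NEG_INF
    for rows in combinations(range(n), k):
        for cols in combinations(range(m), k):
            for perm in permutations(cols):
                best = max(best, sum(deg[r][c] for r, c in zip(rows, perm)))
    return best

A0 = [[1, 0], [0, 1]]
A1 = [[0, 1], [1, 0]]     # A(s) = [[1, s], [s, 1]]
print(matching_bound(A0, A1, 2))   # bound 2; here det A(s) = 1 - s^2, so delta_2 = 2
```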

6.
Hoyer  Neerbek  Shi 《Algorithmica》2008,34(4):429-448
Abstract. We consider the quantum complexities of the following three problems: searching an ordered list, sorting an unordered list, and deciding whether the numbers in a list are all distinct. Letting N be the number of elements in the input list, we prove a lower bound of (1/π)(ln(N)-1) accesses to the list elements for ordered searching, a lower bound of Ω(N log N) binary comparisons for sorting, and a lower bound of Ω(√N log N) binary comparisons for element distinctness. The previously best known lower bounds are (1/12) log₂(N) - O(1) due to Ambainis, Ω(N), and Ω(√N), respectively. Our proofs are based on a weighted all-pairs inner product argument. In addition to our lower bound results, we give an exact quantum algorithm for ordered searching using roughly 0.631 log₂(N) oracle accesses. Our algorithm uses a quantum routine for traversing through a binary search tree faster than classically, and it is of a nature very different from a faster exact algorithm due to Farhi, Goldstone, Gutmann, and Sipser.
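To compare the new lower bound for ordered searching with the exact algorithm on the same scale, one can convert the natural logarithm to base 2 (elementary base-change arithmetic, not taken from the paper):

```latex
% Ordered searching: the lower bound and the exact algorithm expressed in the
% same units of \log_2 N.
\[
  \frac{1}{\pi}\bigl(\ln N - 1\bigr)
  \;=\; \frac{\ln 2}{\pi}\,\log_2 N - \frac{1}{\pi}
  \;\approx\; 0.221\,\log_2 N - 0.318,
  \qquad
  \text{algorithm cost} \;\approx\; 0.631\,\log_2 N .
\]
```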

7.
Makino  Yamashita  Kameda 《Algorithmica》2008,34(3):240-260
Abstract. Given a graph G=(V,E) and a set of vertices M ⊆ V, a vertex v ∈ V is said to be controlled by M if the majority of v's neighbors (including itself) belong to M. M is called a monopoly in G if every vertex v ∈ V is controlled by M. For a specified M and a given range for the edge set E (E_1 ⊆ E ⊆ E_2), we try to determine an E such that M is a monopoly in G=(V,E). We first present a polynomial algorithm for testing whether such an E exists, by formulating it as a network flow problem. Assuming that a solution for E does exist, we then show that solutions with the maximum and minimum |E|, respectively, can be found in polynomial time by solving weighted matching problems. In case there is no solution for E, we want to maximize the number of vertices controlled by the given M. Unfortunately, this problem turns out to be NP-hard. We therefore design a simple approximation algorithm which guarantees an approximation ratio of 2.
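The paper's algorithms work with network flow and matching formulations; checking the definition itself is straightforward. Below is a minimal Python sketch (majority taken as "at least half of the closed neighbourhood", a common convention) that tests whether a given M is a monopoly:

```python
# Hedged sketch (the testing/optimization in the paper is via network flow and
# matching; this only checks the definition): M is a monopoly in G = (V, E) if
# every vertex v has at least half of its closed neighbourhood N[v] inside M.
def controlled(v, adj, M):
    closed = adj[v] | {v}
    return 2 * len(closed & M) >= len(closed)

def is_monopoly(adj, M):
    return all(controlled(v, adj, M) for v in adj)

# Path on four vertices 0-1-2-3; M = {1, 2} controls every vertex.
adj = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
print(is_monopoly(adj, {1, 2}))   # True
print(is_monopoly(adj, {0}))      # False
```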

8.
This paper describes the simulated car racing competition that was arranged as part of the 2007 IEEE Congress on Evolutionary Computation. We present the game that was used as the domain for the competition, the controllers submitted as entries, and the results of the competition. With this paper, we hope to provide some insight into the efficacy of various computational intelligence methods on a well-defined game task, as well as an example of one way of running a competition. In the process, we provide a set of reference results for those who wish to use the simplerace game to benchmark their own algorithms. The paper is co-authored by the organizers and participants of the competition.
Authors: Julian Togelius (corresponding author), Simon Lucas, Ho Duc Thang, Jonathan M. Garibaldi, Tomoharu Nakashima, Chin Hiong Tan, Itamar Elhanany, Shay Berant, Philip Hingston, Robert M. MacCallum, Thomas Haferlach, Aravind Gowrisankar, Pete Burrow

9.
We present a study of using camera-phones and visual-tags to access mobile services. Firstly, a user-experience study is described in which participants were both observed learning to interact with a prototype mobile service and interviewed about their experiences. Secondly, a pointing-device task is presented in which quantitative data was gathered regarding the speed and accuracy with which participants aimed and clicked on visual-tags using camera-phones. We found that participants’ attitudes to visual-tag-based applications were broadly positive, although they had several important reservations about camera-phone technology more generally. Data from our pointing-device task demonstrated that novice users were able to aim and click on visual-tags quickly (well under 3 s per pointing-device trial on average) and accurately (almost all meeting our defined speed/accuracy tradeoff of 6% error-rate). Based on our findings, design lessons for camera-phone and visual-tag applications are presented.
Authors: Eleanor Toye (corresponding author), Richard Sharp, Anil Madhavapeddy, David Scott, Eben Upton, Alan Blackwell

10.
F.-R. Lin 《Calcolo》2003,40(4):231-248
In this paper we consider the numerical solution of Fredholm integral equations of the second kind. Discretizing the integral equation by a certain quadrature rule, we get a linear system with coefficient matrix I - AW, where I is the identity matrix, A is the discretization matrix corresponding to the kernel function a(x,t), and W is a diagonal matrix which depends on the quadrature rule. We propose an approximation scheme based on the polynomial interpolation technique and use the scheme to compute approximation matrices A_a of A and matrices B_a such that (I + B_a W)(I - A_a W) ≈ I for sufficiently large N, where N is the number of quadrature points used in the discretization. The approximations A_a and B_a, as well as matrix-vector products with them, are obtained in O(N) operations by using the approximation scheme. Hence preconditioned iterative methods such as the preconditioned conjugate gradient method and the residual correction scheme are well suited for the solution of the resulting preconditioned system.
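A minimal Nyström-style version of the setup described above can be written in a few lines of NumPy. This is only the straightforward dense discretization; the kernel, right-hand side, and quadrature rule below are illustrative, and the paper's O(N) approximation scheme and preconditioners are not implemented here:

```python
# Hedged sketch of the Nystrom-style setup described above (not the paper's
# O(N) approximation scheme): discretize a second-kind equation
#     y(x) - \int_0^1 a(x, t) y(t) dt = g(x)
# with a quadrature rule, form (I - A W) with W the diagonal matrix of
# quadrature weights, and solve the dense system directly.
import numpy as np

def solve_fredholm_second_kind(kernel, rhs, N=200):
    x = np.linspace(0.0, 1.0, N)
    w = np.full(N, 1.0 / (N - 1))            # trapezoidal weights on [0, 1]
    w[0] = w[-1] = 0.5 / (N - 1)
    A = kernel(x[:, None], x[None, :])        # A[i, j] = a(x_i, t_j)
    W = np.diag(w)
    return x, np.linalg.solve(np.eye(N) - A @ W, rhs(x))

# Toy kernel and right-hand side (names are illustrative).
kernel = lambda x, t: 0.5 * np.exp(-np.abs(x - t))
rhs = lambda x: np.ones_like(x)
x, y = solve_fredholm_second_kind(kernel, rhs)
print(y[:3])
```

In the paper's setting, the dense solve above would instead be replaced by a preconditioned iterative method, with I + B_a W acting as an approximate inverse of I - A_a W.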

11.
A new version of XtalOpt, a user-friendly GPL-licensed evolutionary algorithm for crystal structure prediction, is available for download from the CPC library or the XtalOpt website, http://xtalopt.openmolecules.net. The new version supports four external geometry optimization codes (VASP, GULP, PWSCF, and CASTEP), as well as three external queuing systems (PBS, SGE, and SLURM) and a “Local” option. The local queuing system allows the geometry optimizations to be performed on the user's workstation if an external computational cluster is unavailable. Support for the Windows operating system has been added, and a Windows installer is provided. Numerous bugfixes and feature enhancements have been made in the new release as well.

New version program summary

Program title: XtalOpt
Catalogue identifier: AEGX_v2_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEGX_v2_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: GPL v2.1 or later [1]
No. of lines in distributed program, including test data, etc.: 125 383
No. of bytes in distributed program, including test data, etc.: 11 607 415
Distribution format: tar.gz
Programming language: C++
Computer: PCs, workstations, or clusters
Operating system: Linux, MS Windows
Classification: 7.7
External routines: Qt [2], Open Babel [3], Avogadro [4], and one of: VASP [5], PWSCF [6], GULP [7], CASTEP [8]
Catalogue identifier of previous version: AEGX_v1_0
Journal reference of previous version: Comput. Phys. Comm. 182 (2011) 372
Does the new version supersede the previous version?: Yes
Nature of problem: Predicting the crystal structure of a system from its stoichiometry alone remains a grand challenge in computational materials science, chemistry, and physics.
Solution method: Evolutionary algorithms are stochastic search techniques which use concepts from biological evolution in order to locate the global minimum of a crystalline structure on its potential energy surface. Our evolutionary algorithm, XtalOpt, is freely available for use and collaboration under the GNU Public License. See the original publication on XtalOpt's implementation [11] for more information on the method.
Reasons for new version: Since XtalOpt's initial release in June 2010, support for additional optimizers, queuing systems, and an operating system has been added. XtalOpt can now use VASP, GULP, PWSCF, or CASTEP to perform local geometry optimizations. The queue submission code has been rewritten, and now supports running any of the above codes on ssh-accessible computer clusters that use the Portable Batch System (PBS), Sun Grid Engine (SGE), or SLURM queuing systems for managing the optimization jobs. Alternatively, geometry optimizations may be performed on the user's workstation using the new internal “Local” queuing system if high performance computing resources are unavailable. XtalOpt has been built and tested on the Microsoft Windows operating system (XP or later) in addition to Linux, and a Windows installer is provided. The installer includes a development version of Avogadro that contains expanded crystallography support [12] that is not available in the mainline Avogadro releases. Other notable new developments include:
  • LIBSSH [10] is distributed with the XtalOpt sources and used for communication with the remote clusters, eliminating the previous requirement to set up public-key authentication;
  • Plotting enthalpy (or energy) vs. structure number in the plot tab will trace out the history of the most stable structure as the search progresses. A read-only mode has been added to allow inspection of previous searches through the user interface without connecting to a cluster or submitting new jobs;
  • The tutorial [13] has been rewritten to reflect the changes to the interface and the newly supported codes. Expanded sections on optimization schemes and save/resume have been added;
  • The included version of SPGLIB [9] has been updated. An option has been added to set the Cartesian tolerance of the space group detection. A new option has been added to the Progress table's right-click menu that copies the selected structure's POSCAR-formatted representation to the clipboard;
  • Numerous other small bugfixes/enhancements.
Summary of revisions: See “Reasons for new version” above.
Running time: User dependent. The program runs until stopped by the user.
References:
  [1] http://www.gnu.org/licenses/gpl.html
  [2] http://www.trolltech.com/
  [3] http://openbabel.org/
  [4] http://avogadro.openmolecules.net
  [5] http://cms.mpi.univie.ac.at/vasp
  [6] http://www.quantum-espresso.org
  [7] https://www.ivec.org/gulp
  [8] http://www.castep.org
  [9] http://spglib.sourceforge.net
  [10] http://www.libssh.org
  [11] D. Lonie, E. Zurek, Comput. Phys. Comm. 182 (2011) 372–387, doi:10.1016/j.cpc.2010.07.048
  [12] http://davidlonie.blogspot.com/2011/03/new-avogadro-crystallography-extension.html
  [13] http://xtalopt.openmolecules.net/globalsearch/docs/tut-xo.html

12.
The program FIESTA has been completely rewritten. It can now be used not only as a tool to evaluate Feynman integrals numerically, but also to expand Feynman integrals automatically in limits of momenta and masses with the use of sector decompositions and Mellin–Barnes representations. Other important improvements to the code are complete parallelization (even across multiple computers), high-precision arithmetic (making it possible to calculate integrals that could not be evaluated before), new integrators, Speer sectors as a decomposition strategy, and the possibility to evaluate more general parametric integrals.

Program summary

Program title: FIESTA 2
Catalogue identifier: AECP_v2_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AECP_v2_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: GNU GPL version 2
No. of lines in distributed program, including test data, etc.: 39 783
No. of bytes in distributed program, including test data, etc.: 6 154 515
Distribution format: tar.gz
Programming language: Wolfram Mathematica 6.0 (or higher) and C
Computer: From a desktop PC to a supercomputer
Operating system: Unix, Linux, Windows, Mac OS X
Has the code been vectorised or parallelized?: Yes, the code has been parallelized for use on multi-kernel computers as well as clusters via Mathlink over the TCP/IP protocol. The program can work successfully with a single processor; however, it is ready to work in a parallel environment, and the use of multi-kernel processors and multi-processor computers significantly speeds up the calculation; on clusters the calculation speed can be improved even further.
RAM: Depends on the complexity of the problem
Classification: 4.4, 4.12, 5, 6.5
Catalogue identifier of previous version: AECP_v1_0
Journal reference of previous version: Comput. Phys. Comm. 180 (2009) 735
External routines: QLink [1], Cuba library [2], MPFR [3]
Does the new version supersede the previous version?: Yes
Nature of problem: The sector decomposition approach to evaluating Feynman integrals falls apart into the sector decomposition itself, where one has to minimize the number of sectors; the pole resolution and epsilon expansion; and the numerical integration of the resulting expression.
Solution method: The sector decomposition is based on a new strategy as well as on classical strategies such as Speer sectors. The sector decomposition, pole resolution and epsilon expansion are performed in Wolfram Mathematica 6.0 or, preferably, 7.0 (enabling parallelization) [4]. The data is stored on hard disk via a special program, QLink [1]. The expression for integration is passed to the C part of the code, which parses the string and performs the integration by one of the algorithms in the Cuba library package [2]. This part of the evaluation is perfectly parallelized on multi-kernel computers.
Reasons for new version:
  • 1. 
    The first version of FIESTA had problems related to numerical instability, so for some classes of integrals it could not produce a result.
  • 2. 
    The sector decomposition method can be applied not only for integral calculation.
Summary of revisions:
  • 1. 
    New integrator library is used.
  • 2. 
    New methods to deal with numerical instability (MPFR library).
  • 3. 
    Parallelization in Mathematica.
  • 4. 
    Parallelization on multiple computers via TCP-IP.
  • 5. 
    New sector decomposition strategy (Speer sectors).
  • 6. 
    Possibility of using FIESTA for integral expansion.
  • 7. 
    Possibility of using FIESTA to discover poles in d.
  • 8. 
    New negative terms resolution strategies.
Restrictions: The complexity of the problem is mostly restricted by the CPU time required to perform the evaluation of the integral.
Running time: Depends on the complexity of the problem.
References:
  [1] http://qlink08.sourceforge.net, open source
  [2] http://www.feynarts.de/cuba/, open source
  [3] http://www.mpfr.org/, open source
  [4] http://www.wolfram.com/products/mathematica/index.html

13.
This work revisits an idea that dates back to the early days of scientific computing: the energy method for stability analysis. It is shown that if a scalar non-linear conservation law is approximated by a semi-discrete conservative scheme, then the energy of the discrete solution evolves at exactly the same rate as the energy of the true solution, provided that the numerical flux is evaluated by a particular formula. With careful treatment of the boundary conditions, this provides a path to the construction of non-dissipative stable discretizations of the governing equations. If shock waves appear in the solution, the discretization must be augmented by appropriate shock operators to account for the dissipation of energy by the shock waves. These results are extended to systems of conservation laws, including the equations of incompressible flow and gas dynamics. In the case of viscous flow, it is also shown that shock waves can be fully resolved by non-dissipative discretizations of this type with a fine enough mesh, such that the cell Reynolds number is at most 2.
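For readers unfamiliar with the notation, the generic objects involved can be written as follows (placeholder notation in LaTeX; the specific flux formula established in the paper is not reproduced here):

```latex
% Generic setting for the discussion above: a scalar conservation law, a
% semi-discrete conservative scheme on a uniform mesh, and the discrete
% energy whose evolution rate the choice of numerical flux controls.
\[
  \frac{\partial u}{\partial t} + \frac{\partial f(u)}{\partial x} = 0,
  \qquad
  \frac{\mathrm{d}u_j}{\mathrm{d}t}
    + \frac{f_{j+1/2} - f_{j-1/2}}{\Delta x} = 0,
  \qquad
  E(t) = \tfrac{1}{2} \sum_j u_j^2 \,\Delta x .
\]
% The paper's result is a choice of f_{j+1/2} for which dE/dt matches the
% energy evolution of the exact solution.
```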

14.
Quantitative usability requirements are a critical but challenging, and hence often neglected, aspect of a usability engineering process. A case study is described in which quantitative usability requirements played a key role in the development of a new user interface for a mobile phone. Within the practical constraints of the project, existing methods for determining usability requirements and evaluating the extent to which they are met could not be applied as such; therefore, tailored methods had to be developed. These methods and their applications are discussed.
Authors: Timo Jokela (corresponding author), Jussi Koivumaa, Jani Pirkola, Petri Salminen, Niina Kantola

15.
16.
No abstract available.
Authors: Peter Rohner (corresponding author), Robert Winter (corresponding author)

17.
The implementation and testing of XtalOpt, an evolutionary algorithm for crystal structure prediction, is outlined. We present our new periodic displacement (ripple) operator, which is ideally suited to extended systems. It is demonstrated that hybrid operators, which combine two pure operators, reduce the number of duplicate structures in the search. This allows for better exploration of the potential energy surface of the system in question, while simultaneously zooming in on the most promising regions. A continuous workflow, which makes better use of computational resources than traditional generation-based algorithms, is employed. Various parameters in XtalOpt are optimized using a novel benchmarking scheme. XtalOpt is available under the GNU Public License, has been interfaced with various codes commonly used to study extended systems, and has an easy-to-use, intuitive graphical interface.
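One plausible form of such a periodic displacement is a smooth wave applied to one fractional coordinate. The Python sketch below is an illustrative guess at this kind of operator; the functional form and parameter names are not taken from XtalOpt's source:

```python
# Hedged sketch of a periodic "ripple" displacement on fractional coordinates,
# in the spirit of the operator described above.  The functional form and
# parameter names are illustrative guesses, not XtalOpt's implementation.
import numpy as np

def ripple(frac_coords, rho=0.1, mu=1, eta=1, rng=None):
    """Displace the z fractional coordinate by a smooth wave in x and y."""
    rng = rng or np.random.default_rng()
    phase_x, phase_y = rng.uniform(0, 2 * np.pi, size=2)
    out = frac_coords.copy()
    wave = rho * np.cos(2 * np.pi * mu * out[:, 0] + phase_x) \
               * np.cos(2 * np.pi * eta * out[:, 1] + phase_y)
    out[:, 2] = (out[:, 2] + wave) % 1.0      # keep coordinates inside the cell
    return out

coords = np.random.default_rng(0).uniform(size=(8, 3))
print(ripple(coords)[:2])
```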

Program summary

Program title: XtalOpt
Catalogue identifier: AEGX_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEGX_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: GPL v2.1 or later [1]
No. of lines in distributed program, including test data, etc.: 36 849
No. of bytes in distributed program, including test data, etc.: 1 149 399
Distribution format: tar.gz
Programming language: C++
Computer: PCs, workstations, or clusters
Operating system: Linux
Classification: 7.7
External routines: QT [2], OpenBabel [3], AVOGADRO [4], SPGLIB [8], and one of: VASP [5], PWSCF [6], GULP [7]
Nature of problem: Predicting the crystal structure of a system from its stoichiometry alone remains a grand challenge in computational materials science, chemistry, and physics.
Solution method: Evolutionary algorithms are stochastic search techniques which use concepts from biological evolution in order to locate the global minimum on the potential energy surface. Our evolutionary algorithm, XtalOpt, is freely available to the scientific community for use and collaboration under the GNU Public License.
Running time: User dependent. The program runs until stopped by the user.
References:
  [1] http://www.gnu.org/licenses/gpl.html
  [2] http://www.trolltech.com/
  [3] http://openbabel.org/
  [4] http://avogadro.openmolecules.net
  [5] http://cms.mpi.univie.ac.at/vasp
  [6] http://www.quantum-espresso.org
  [7] https://www.ivec.org/gulp
  [8] http://spglib.sourceforge.net

18.
There are only a few ethical regulations that deal explicitly with robots, in contrast to a vast number of regulations that may be applied to them. We focus on ethical issues with regard to “responsibility and autonomous robots”, “machines as a replacement for humans”, and “tele-presence”. Furthermore, we examine examples from specific fields of application (medicine and healthcare, armed forces, and entertainment). We do not claim to present a complete list of ethical issues or of regulations in the field of robotics, but we demonstrate that there are legal challenges with regard to these issues.
Authors: Michael Nagenborg (corresponding author, URL: www.michaelnagenborg.de), Rafael Capurro, Jutta Weber, Christoph Pingel

19.
Continuous-time quantum Monte Carlo impurity solvers are algorithms that sample the partition function of an impurity model using diagrammatic Monte Carlo techniques. The present paper describes codes that implement the interaction expansion algorithm originally developed by Rubtsov, Savkin, and Lichtenstein, as well as the hybridization expansion method developed by Werner, Millis, Troyer, et al. These impurity solvers are part of the ALPS-DMFT application package and are accompanied by an implementation of dynamical mean-field self-consistency equations for (single orbital single site) dynamical mean-field problems with arbitrary densities of states.
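Schematically, both solvers sample a partition-function expansion of the following standard form (textbook notation, not code from the package), with H₁ being the interaction term in the Rubtsov et al. solver and the hybridization term in the Werner et al. solver:

```latex
% Schematic form of the expansions these solvers sample: the partition
% function is expanded in powers of a part H_1 of the Hamiltonian and the
% resulting series of multi-dimensional imaginary-time integrals is sampled
% stochastically by diagrammatic Monte Carlo.
\[
  Z \;=\; Z_0 \sum_{k=0}^{\infty} \frac{(-1)^k}{k!}
      \int_0^{\beta} \! d\tau_1 \cdots \int_0^{\beta} \! d\tau_k \,
      \bigl\langle T_\tau \, H_1(\tau_1) \cdots H_1(\tau_k) \bigr\rangle_0 .
\]
```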

Program summary

Program title: dmft
Catalogue identifier: AEIL_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEIL_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: ALPS LIBRARY LICENSE version 1.1
No. of lines in distributed program, including test data, etc.: 899 806
No. of bytes in distributed program, including test data, etc.: 32 153 916
Distribution format: tar.gz
Programming language: C++
Operating system: The ALPS libraries have been tested on the following platforms and compilers:
  • Linux with GNU Compiler Collection (g++ version 3.1 and higher), and Intel C++ Compiler (icc version 7.0 and higher)
  • MacOS X with GNU Compiler (g++ Apple-version 3.1, 3.3 and 4.0)
  • IBM AIX with Visual Age C++ (xlC version 6.0) and GNU (g++ version 3.1 and higher) compilers
  • Compaq Tru64 UNIX with Compaq C++ Compiler (cxx)
  • SGI IRIX with MIPSpro C++ Compiler (CC)
  • HP-UX with HP C++ Compiler (aCC)
  • Windows with Cygwin or coLinux platforms and GNU Compiler Collection (g++ version 3.1 and higher)
RAM: 10 MB–1 GB
Classification: 7.3
External routines: ALPS [1], BLAS/LAPACK, HDF5
Nature of problem: (See [2].) Quantum impurity models describe an atom or molecule embedded in a host material with which it can exchange electrons. They are basic to nanoscience as representations of quantum dots and molecular conductors and play an increasingly important role in the theory of “correlated electron” materials as auxiliary problems whose solution gives the “dynamical mean field” approximation to the self-energy and local correlation functions.
Solution method: Quantum impurity models require a method of solution which provides access to both high and low energy scales and is effective for wide classes of physically realistic models. The continuous-time quantum Monte Carlo algorithms for which we present implementations here meet this challenge. Continuous-time quantum impurity methods are based on partition function expansions of quantum impurity models that are stochastically sampled to all orders using diagrammatic quantum Monte Carlo techniques. For a review of quantum impurity models and their applications and of continuous-time quantum Monte Carlo methods for impurity models we refer the reader to [2].
Additional comments: Use of dmft requires citation of this paper. Use of any ALPS program requires citation of the ALPS [1] paper.
Running time: 60 s–8 h per iteration.
References:
  [1] A. Albuquerque, F. Alet, P. Corboz, et al., J. Magn. Magn. Mater. 310 (2007) 1187
  [2] http://arxiv.org/abs/1012.4474, Rev. Mod. Phys., in press

20.
Nowadays data mining plays an important role in decision making. Since many organizations do not possess in-house data mining expertise, it is beneficial to outsource data mining tasks to external service providers. However, most organizations hesitate to do so due to concerns about the loss of business intelligence and customer privacy. In this paper, we present a Bloom filter based solution that enables organizations to outsource their association rule mining tasks while protecting their business intelligence and customer privacy. Our approach can achieve high precision in data mining by trading off storage requirements. This research was supported by the USA National Science Foundation Grants CCR-0310974 and IIS-0546027.
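The core data structure behind the approach is a Bloom filter, which lets a service provider test set membership on hashed data without seeing the original items. Below is a minimal Python sketch; the parameters and item names are illustrative, and the paper's encoding of transactions and the association-rule mining built on top of it are more involved:

```python
# Hedged sketch of a Bloom filter: insert/query only.  The filter size m and
# hash count h control the false-positive rate, i.e. the precision/storage
# trade-off mentioned in the abstract.
import hashlib

class BloomFilter:
    def __init__(self, m=1 << 16, h=5):
        self.m, self.h = m, h
        self.bits = bytearray(m // 8 + 1)

    def _positions(self, item):
        for i in range(self.h):
            digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.m

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def __contains__(self, item):
        return all(self.bits[pos // 8] >> (pos % 8) & 1
                   for pos in self._positions(item))

bf = BloomFilter()
for transaction_item in ["milk", "bread", "beer"]:
    bf.add(transaction_item)
print("bread" in bf, "diapers" in bf)   # True, (almost certainly) False
```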
Authors: Ling Qiu (corresponding author), Yingjiu Li, Xintao Wu
