Similar literature
20 similar documents found.
1.
We study the cover time of multiple random walks on undirected graphs G=(V,E). We consider k parallel, independent random walks that start from the same vertex. The speed-up is defined as the ratio of the cover time of a single random walk to the cover time of these k random walks. Recently, Alon et al. (2008) [5] derived several upper bounds on the cover time, which imply a speed-up of Ω(k) for several graphs; however, for many of them, k has to be bounded by O(log n). They also conjectured that, for any 1 ≤ k ≤ n, the speed-up is at most O(k) on any graph. We prove the following main results:
We present a new lower bound on the speed-up that depends on the mixing time. It gives a speed-up of Ω(k) on many graphs, even if k is as large as n.
We prove that the speed-up is O(k log n) on any graph. For a large class of graphs we can also improve this bound to O(k), matching the conjecture of Alon et al.
We determine the order of the speed-up for any value of 1 ≤ k ≤ n on hypercubes, random graphs and degree-restricted expanders. For d-dimensional tori with d > 2, our bounds are tight up to logarithmic factors.
Our findings also reveal a surprisingly sharp threshold behaviour for certain graphs, e.g., the d-dimensional torus with d > 2 and hypercubes: there is a value T such that the speed-up is approximately min{T, k} for any 1 ≤ k ≤ n.
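A minimal simulation sketch of the speed-up measure defined in this abstract, not taken from the paper: it estimates the cover time of k parallel independent walks on an illustrative cycle graph. The graph, the value of k, and the trial counts are assumptions chosen for brevity.

```python
import random

def cover_time(adj, start, k, rng):
    """Steps until k independent walks started at `start` have jointly visited every node."""
    n = len(adj)
    pos = [start] * k
    visited = {start}
    steps = 0
    while len(visited) < n:
        steps += 1
        pos = [rng.choice(adj[v]) for v in pos]   # each walk moves to a uniform neighbour
        visited.update(pos)
    return steps

def speed_up(adj, start, k, trials=200, seed=0):
    """Empirical speed-up: E[cover time of 1 walk] / E[cover time of k walks]."""
    rng = random.Random(seed)
    one = sum(cover_time(adj, start, 1, rng) for _ in range(trials)) / trials
    many = sum(cover_time(adj, start, k, rng) for _ in range(trials)) / trials
    return one / many

# Illustrative example: cycle on 64 nodes, 8 parallel walks from node 0.
n = 64
cycle = {v: [(v - 1) % n, (v + 1) % n] for v in range(n)}
print(speed_up(cycle, start=0, k=8))
```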

2.
The rotor-router mechanism was introduced as a deterministic alternative to the random walk in undirected graphs. In this model, a set of k identical walkers is deployed in parallel, starting from a chosen subset of nodes, and moving around the graph in synchronous steps. During the process, each node successively propagates walkers visiting it along its outgoing arcs in round-robin fashion, according to a fixed ordering. We consider the cover time of such a system, i.e., the number of steps after which each node has been visited by at least one walk, regardless of the initialization of the walks. We show that for any graph with m edges and diameter D, this cover time is at most Θ(mD/log k) and at least Θ(mD/k), which corresponds to a speedup of between Θ(log k) and Θ(k) with respect to the cover time of a single walk.
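A minimal sketch of the rotor-router mechanism described above, assuming the graph is given as an adjacency dictionary whose list order serves as the fixed port ordering at each node; the grid graph and the starting positions are illustrative.

```python
def rotor_router_cover_time(adj, starts):
    """Parallel rotor-router walk: each node forwards the walkers visiting it along
    its outgoing arcs in round-robin order, following the fixed ordering in adj[v]."""
    rotor = {v: 0 for v in adj}              # next outgoing port at each node
    walkers = list(starts)                   # current positions of the k walkers
    visited = set(starts)
    steps = 0
    while len(visited) < len(adj):
        steps += 1
        nxt = []
        for v in walkers:                    # propagate every walker one synchronous step
            nxt.append(adj[v][rotor[v]])
            rotor[v] = (rotor[v] + 1) % len(adj[v])
        walkers = nxt
        visited.update(walkers)
    return steps

# Illustrative example: 4x4 grid graph, two walkers starting at opposite corners.
def grid(w, h):
    g = {}
    for x in range(w):
        for y in range(h):
            g[(x, y)] = [(x + dx, y + dy) for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                         if 0 <= x + dx < w and 0 <= y + dy < h]
    return g

print(rotor_router_cover_time(grid(4, 4), starts=[(0, 0), (3, 3)]))
```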

3.
4.
We investigate important combinatorial and algorithmic properties of Gn,m,p random intersection graphs. In particular, we prove that with high probability (a) random intersection graphs are expanders, (b) random walks on such graphs are “rapidly mixing” (in particular they mix in logarithmic time) and (c) the cover time of random walks on such graphs is optimal (i.e. it is Θ(n log n)). All results are proved for p very close to the connectivity threshold and for the interesting, non-trivial range where random intersection graphs differ from classical Gn,p random graphs.

5.
The analysis of complex networks is of major interest in various fields of science. In many applications we face the challenge that the exact topology of a network is unknown, but we are instead given information about distances within this network. The theoretical approaches to this problem have so far focused on the reconstruction of graphs from shortest-path distance matrices. Often, however, movements in networks do not follow shortest paths but occur in a random fashion. In these cases an appropriate distance measure can be defined as the mean length of a random walk between two nodes — a quantity known as the mean first hitting time. In this contribution we investigate whether a graph can be reconstructed from its mean first hitting time matrix and put forward an algorithm for solving this problem. A heuristic method to reduce the computational effort is described and analyzed. In the case of trees we can even give an algorithm for reconstructing graphs from incomplete random walk distance matrices.
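The reconstruction problem above takes a mean first hitting time matrix as input; the sketch below only illustrates the forward direction, i.e. how such a matrix can be computed from a known graph by solving one linear system per target node. The function name and the path-graph example are illustrative assumptions.

```python
import numpy as np

def mean_first_hitting_times(A):
    """H[i, j] = expected number of steps for a simple random walk started at i
    to first reach j, computed column by column from the transition matrix."""
    n = A.shape[0]
    P = A / A.sum(axis=1, keepdims=True)     # random-walk transition matrix
    H = np.zeros((n, n))
    for j in range(n):
        idx = [i for i in range(n) if i != j]
        # h_i = 1 + sum_k P[i, k] * h_k  for i != j, with h_j = 0
        M = np.eye(n - 1) - P[np.ix_(idx, idx)]
        H[idx, j] = np.linalg.solve(M, np.ones(n - 1))
    return H

# Illustrative example: path graph 0-1-2-3.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
print(mean_first_hitting_times(A))
```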

6.
We consider discrete-time nearest-neighbor quantum walks in random environments in one dimension. Using a method based on path counting, we present both quenched and annealed weak limit theorems for the quantum walk.

7.
Computer Networks, 1999, 31(11-16): 1291-1303
Recent research has studied how to measure the size of a search engine, in terms of the number of pages indexed. In this paper, we consider a different measure for search engines, namely the quality of the pages in a search engine index. We provide a simple, effective algorithm for approximating the quality of an index by performing a random walk on the Web, and we use this methodology to compare the index quality of several major search engines.

8.
9.
10.
Medical ultrasound image segmentation based on random walks
Medical ultrasound images inevitably suffer from speckle noise, weak boundaries and similar problems, which makes satisfactory segmentation difficult to achieve. The random walks algorithm is robust to noise and extracts weak boundaries well. We apply this algorithm to medical ultrasound image segmentation: by combining region information with user-specified seed points, and by means of a circuit analogy and the combinatorial Dirichlet problem, the probability that each unseeded pixel first reaches a seed labelled as object or background is obtained, and each pixel is then assigned the label of the seed class with the largest probability, which yields the segmentation. Experimental results show that the method is effective for medical ultrasound image segmentation. The algorithm obtains the solution of the Dirichlet problem by solving a sparse, symmetric, positive-definite system of linear equations, which greatly improves the computational speed.
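A toy sketch of the random-walker idea described in this abstract, reduced to a 1-D signal so the combinatorial Dirichlet problem stays small: Gaussian edge weights, a graph Laplacian, and one sparse symmetric positive-definite solve give the probability of reaching an object seed before a background seed. The parameter beta, the seed placement, and the synthetic signal are assumptions.

```python
import numpy as np

def random_walker_1d(signal, seeds, beta=10.0):
    """Toy 1-D random walker segmentation: label each unseeded sample with the
    seed class whose combinatorial Dirichlet probability is largest.
    seeds: dict {index: label} with labels 0 (background) / 1 (object)."""
    n = len(signal)
    # Gaussian edge weights between neighbouring samples
    w = np.exp(-beta * np.diff(signal) ** 2)
    L = np.zeros((n, n))
    for i, wi in enumerate(w):
        L[i, i] += wi; L[i + 1, i + 1] += wi
        L[i, i + 1] -= wi; L[i + 1, i] -= wi
    seeded = sorted(seeds)
    free = [i for i in range(n) if i not in seeds]
    labels = np.array([seeds[i] for i in seeded], dtype=float)
    # Dirichlet problem: L_U x = -B m, where m marks seeds labelled "object"
    L_U = L[np.ix_(free, free)]
    B = L[np.ix_(free, seeded)]
    prob_object = np.linalg.solve(L_U, -B @ labels)
    out = np.zeros(n, dtype=int)
    for i, lab in seeds.items():
        out[i] = lab
    out[free] = (prob_object > 0.5).astype(int)
    return out

# Illustrative example: a noisy step edge with one background and one object seed.
rng = np.random.default_rng(0)
sig = np.concatenate([np.zeros(20), np.ones(20)]) + 0.1 * rng.standard_normal(40)
print(random_walker_1d(sig, seeds={0: 0, 39: 1}))
```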

11.
In this paper, we propose a simple randomized algorithm for the NAE2SAT problem. The analysis of the algorithm uses the theory of symmetric, absorbing random walks. Inasmuch as the NAE2SAT problem is in the complexity class L (Deterministic Logarithmic Space) and L ⊆ P ⊆ RP, the existence of such an algorithm is not surprising. However, the simplicity of our approach and the insights revealed by its analysis make the current study worthwhile. Not-all-equal SAT (NAESAT) is the variant of the satisfiability problem (SAT), in which we are interested in an assignment that satisfies all the clauses, but falsifies at least one literal in each clause. NAESAT is an interesting SAT variant in its own right and has been studied by SAT theoreticians, on account of its connections to the graph colouring problem. It is well-known that the variant of NAESAT with exactly three literals per clause (NAE3SAT) is NP-complete [M.R. Garey and D.S. Johnson, Computers and Intractability: A Guide to the Theory of NP-Completeness, W. H. Freeman Company, San Francisco, 1979]. Likewise, there seem to be a number of polynomial time algorithms for NAE2SAT as part of algorithm design folklore [D.S. Johnson, Personal Communication]. Here we show that the NAE2SAT problem admits an extremely simple, literal-flipping algorithm, in precisely the same way that 2SAT does. On a satisfiable instance involving n variables, the probability that our algorithm does not find a satisfying assignment is at most 1/24. The randomized algorithm takes O(1) extra space, in the presence of a verifier and provides an interesting insight into checking whether a graph is bipartite. It must be noted that space-parsimonious randomized algorithms for a problem, such as the one proposed in this paper, invariably lead to error-bounded, online algorithms for the same. As part of our analysis, we argue that a restricted variant of NAE2SAT is L-complete. We note that the bounds derived in this paper are sharper than the ones in Papadimitriou [C.H. Papadimitriou, On selecting a satisfying truth assignment, in Proceedings of the 32nd Annual Symposium on Foundations of Computer Science, San Juan, Puerto Rico, 1–4 October 1991, IEEE, ed., IEEE, Washington, DC, 1991, pp. 163–169].
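A hedged sketch of a Papadimitriou-style literal-flipping procedure for NAE-2SAT: it follows the general idea in the abstract (flip one variable of a violated clause chosen at random) but is not the paper's algorithm, and its restart and step budgets are arbitrary assumptions. The example instance encodes 2-colouring of a 4-cycle, matching the bipartiteness connection mentioned above.

```python
import random

def nae2sat_random_flip(clauses, n_vars, seed=0):
    """Random literal flipping for NAE-2SAT (a sketch, not the paper's algorithm).
    A clause (a, b) uses non-zero literals: +i means x_i, -i means not x_i.
    It is NAE-satisfied when its two literals take DIFFERENT truth values.
    Returns a satisfying assignment, or None after the flip budget is exhausted."""
    rng = random.Random(seed)

    def lit_value(lit, assign):
        v = assign[abs(lit)]
        return v if lit > 0 else not v

    for _ in range(25):                                   # independent restarts
        assign = {i: rng.random() < 0.5 for i in range(1, n_vars + 1)}
        for _ in range(2 * n_vars * n_vars):
            bad = [c for c in clauses
                   if lit_value(c[0], assign) == lit_value(c[1], assign)]
            if not bad:
                return assign
            a, b = rng.choice(bad)
            var = abs(rng.choice((a, b)))                 # flip one of the two variables
            assign[var] = not assign[var]
    return None

# Illustrative example: NAE-2SAT instance equivalent to 2-colouring a 4-cycle.
clauses = [(1, 2), (2, 3), (3, 4), (4, 1)]                # each clause forces x_i != x_j
print(nae2sat_random_flip(clauses, n_vars=4))
```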

12.
The flat histogram version of the pruned and enriched Rosenbluth method (FLATPERM) is very efficient for calculating densities of classical states of polymers on a lattice. In this paper the accuracy of this method is tested by comparing simulations to the exact distribution of random walks on a one-dimensional lattice. The boundary effects of restricting the region of parameter space are investigated as well. Finally, using FLATPERM we calculate the distribution of the end-to-end distance of self-avoiding walks on a cubic lattice and simulate the scaling behavior of the most probable end-to-end distance.
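For the 1-D reference case mentioned in this abstract, the exact end-position distribution of an n-step simple random walk is a shifted binomial; the short sketch below computes it and checks it against a direct simulation. The step and trial counts are illustrative.

```python
from math import comb
import random

def end_position_distribution(n_steps):
    """Exact distribution of the end position of an n-step simple random walk
    on the 1-D integer lattice: P(X = 2k - n) = C(n, k) / 2^n."""
    return {2 * k - n_steps: comb(n_steps, k) / 2 ** n_steps
            for k in range(n_steps + 1)}

# Illustrative check against a direct simulation.
rng = random.Random(1)
n, trials = 10, 100_000
exact = end_position_distribution(n)
counts = {}
for _ in range(trials):
    x = sum(rng.choice((-1, 1)) for _ in range(n))
    counts[x] = counts.get(x, 0) + 1
print(exact[0], counts.get(0, 0) / trials)   # P(return to origin after 10 steps)
```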

13.
Wang Zhaobin, Wang Shuai, Guo Lijie. Neural Computing & Applications, 2018, 29(11): 1101-1114

The purpose of multi-focus image fusion is to acquire an image in which all objects are in focus by fusing source images that have different focus points. A novel multi-focus image fusion method based on PCNN and random walks is proposed in this paper. PCNN is consistent with human visual perception, and the random walks model has shown great potential for image fusion in recent years. The proposed method first employs PCNN to measure the sharpness of the source images. Then, an original fusion map is constructed. Next, random walks are employed to improve the accuracy of fused-region detection. Finally, the fused image is generated according to the probabilities computed by the random walks. Experiments demonstrate that our method outperforms many existing multi-focus image fusion methods in both visual perception and objective criteria. To assess the performance of the method in practical applications, some examples are given at the end of the paper.


14.
Exact and approximate graph matching using random walks
In this paper, we propose a general framework for graph matching which is suitable for different problems of pattern recognition. The pattern representation we assume is at the same time highly structured, like for classic syntactic and structural approaches, and of subsymbolic nature with real-valued features, like for connectionist and statistic approaches. We show that random walk based models, inspired by Google's PageRank, give rise to a spectral theory that nicely enhances the graph topological features at node level. As a straightforward consequence, we derive a polynomial algorithm for the classic graph isomorphism problem, under the restriction of dealing with Markovian spectrally distinguishable graphs (MSD), a class of graphs that does not seem to be easily reducible to others proposed in the literature. The experimental results that we found on different test-beds of the TC-15 graph database show that the defined MSD class "almost always" covers the database, and that the proposed algorithm is significantly more efficient than the top-scoring VF algorithm on the same data. Most interestingly, the proposed approach is very well-suited for dealing with partial and approximate graph matching problems, derived for instance from image retrieval tasks. We consider the objects of the COIL-100 visual collection and provide a graph-based representation, whose node labels contain appropriate visual features. We show that the adoption of classic bipartite graph matching algorithms offers a straightforward generalization of the algorithm given for graph isomorphism and, finally, we report very promising experimental results on the COIL-100 visual collection.
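A hedged sketch of the general idea of PageRank-style random-walk scores used as node-level signatures; this is not the paper's MSD algorithm, only a necessary-condition check built on a standard PageRank iteration. The damping factor, iteration count, and example graphs are assumptions.

```python
import numpy as np

def pagerank_signature(A, alpha=0.85, iters=200):
    """PageRank-style stationary scores used as per-node signatures."""
    n = A.shape[0]
    deg = A.sum(axis=1)
    P = A / np.where(deg > 0, deg, 1)[:, None]   # row-stochastic transition matrix
    r = np.full(n, 1.0 / n)
    for _ in range(iters):
        r = (1 - alpha) / n + alpha * (P.T @ r)
    return np.sort(np.round(r, 10))

def maybe_isomorphic(A, B):
    """Necessary condition only: identical sorted random-walk signatures.
    Distinct signatures certify non-isomorphism; equal ones do not prove it."""
    return A.shape == B.shape and np.allclose(pagerank_signature(A),
                                              pagerank_signature(B))

# Illustrative example: a 4-cycle vs. a path on 4 nodes.
C4 = np.array([[0,1,0,1],[1,0,1,0],[0,1,0,1],[1,0,1,0]], float)
P4 = np.array([[0,1,0,0],[1,0,1,0],[0,1,0,1],[0,0,1,0]], float)
print(maybe_isomorphic(C4, P4))   # False: the signatures differ
```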

15.
16.
In this article, we address the problem of counting the number of peers in a peer-to-peer system. This functionality has proven useful in the design of several peer-to-peer applications. However, it is delicate to achieve when nodes are organised in an overlay network, and each node has only a limited, local knowledge of the whole system. In this paper, we propose a generic technique, called the Sample&Collide method, to solve this problem. It relies on a sampling sub-routine which returns randomly chosen peers. Such a sampling sub-routine is of independent interest. It can be used for instance for neighbour selection by new nodes joining the system. We use a continuous time random walk to obtain such samples. The core of the method consists in gathering random samples until a target number of redundant samples are obtained. This method is inspired by the “birthday paradox” technique of Bawa et al. (Estimating aggregates on a peer-to-peer network, Technical Report, Department of Computer Science, Stanford University), upon which it improves by achieving a target variance with fewer samples. We analyse the complexity and accuracy of the proposed method. We illustrate in particular how expansion properties of the overlay affect its performance. We use simulations to evaluate its performance in both static and dynamic environments with sudden changes in peer populations, and verify that it tracks varying system sizes accurately.
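A simplified sketch of birthday-paradox size estimation in the spirit of Sample&Collide: it assumes an ideal uniform sampling sub-routine (the paper obtains samples via a continuous-time random walk) and uses a plain method-of-moments inversion rather than the paper's estimator. The target collision count and the system size are illustrative.

```python
import random

def estimate_size(sample_peer, target_collisions=20, seed=0):
    """Birthday-paradox size estimate: draw (assumed uniform) peer samples until
    `target_collisions` pairwise collisions occur, then invert
    E[collisions] ~ T*(T-1)/(2n). `sample_peer` stands in for the sampling sub-routine."""
    rng = random.Random(seed)
    seen = {}                                 # peer id -> how many times sampled so far
    collisions = 0
    samples = 0
    while collisions < target_collisions:
        p = sample_peer(rng)
        samples += 1
        collisions += seen.get(p, 0)          # new pairwise collisions with earlier draws
        seen[p] = seen.get(p, 0) + 1
    return samples * (samples - 1) / (2 * collisions)

# Illustrative example: a system of 10,000 peers with an ideal uniform sampler.
true_n = 10_000
print(estimate_size(lambda rng: rng.randrange(true_n)))
```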

17.
Accurate mass segmentation on mammograms is a critical step in computer-aided diagnosis (CAD) systems. It is also a challenging task since some of the mass lesions are embedded in normal tissues and possess poor contrast or ambiguous margins. Besides, the shapes and densities of masses in mammograms are various. In this paper, a hybrid method combining a random walks algorithm and Chan-Vese (CV) active contour is proposed for automatic mass segmentation on mammograms. The data set used in this study consists of 1095 mass regions of interest (ROIs). First, the original ROI is preprocessed to suppress noise and surrounding tissues. Based on the preprocessed ROI, a set of seed points is generated for initial random walks segmentation. Afterward, an initial contour of mass and two probability matrices are produced by the initial random walks segmentation. These two probability matrices are used to modify the energy function of the CV model for prevention of contour leaking. Lastly, the final segmentation result is derived by the modified CV model, during which the probability matrices are updated by inserting several rounds of random walks. The proposed method is tested and compared with four other methods. The segmentation results are evaluated based on four evaluation metrics. Experimental results indicate that the proposed method produces more accurate mass segmentation results than the other four methods.

18.
Random walks are gaining much attention from the networks research community. They are the basis of many proposals aimed at solving a variety of network-related problems such as resource location, network construction, node sampling, etc. This interest in random walks is justified by their inherent properties. They are very simple to implement, as nodes only require local information to take routing decisions. Also, random walks demand little processing power and bandwidth. Besides, they are very resilient to changes in the network topology. Here, we quantify the effectiveness of independent random walks (i.e., random walks that have statistical properties identical to random sampling) as a search mechanism in one-hop replication networks: networks where each node knows its neighbors' identity/resources, and so it can reply to queries on their behalf. Our model focuses on estimating the expected average search time of the random walk by applying network queuing theory. To do this, we must first provide the expected average search length. This is computed by means of estimations of the expected average coverage at each step of the random walk, for all random walks in all random networks with a given degree distribution. This model takes into account the revisiting effect: the fact that, as the random walk progresses, the probability of arriving at nodes already visited increases, which affects how the network coverage evolves. That is, we do not model the coverage as a memoryless process. Furthermore, we conduct a series of simulations to evaluate, in practice, the above mentioned metrics. Our results show a very close correlation between the analytical and the experimental results.
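A small simulation sketch of the coverage process discussed above: it tracks how many distinct nodes a single random walk has visited after each step, which makes the revisiting effect (the flattening of the curve) visible. The graph construction, sizes, and trial counts are assumptions, and no queueing analysis is attempted.

```python
import random

def coverage_curve(adj, steps, trials=100, seed=0):
    """Average number of distinct nodes a single random walk has visited after
    each step; the flattening of the curve reflects the revisiting effect."""
    rng = random.Random(seed)
    n = len(adj)
    avg = [0.0] * (steps + 1)
    for _ in range(trials):
        v = rng.randrange(n)
        visited = {v}
        avg[0] += 1
        for t in range(1, steps + 1):
            v = rng.choice(adj[v])
            visited.add(v)
            avg[t] += len(visited)
    return [a / trials for a in avg]

# Illustrative example: a random 4-regular-like graph built as the union of two random cycles.
def union_of_two_random_cycles(n, rng):
    adj = {v: [] for v in range(n)}
    for _ in range(2):
        perm = list(range(n)); rng.shuffle(perm)
        for a, b in zip(perm, perm[1:] + perm[:1]):
            adj[a].append(b); adj[b].append(a)
    return adj

rng = random.Random(1)
adj = union_of_two_random_cycles(200, rng)
curve = coverage_curve(adj, steps=400)
print(curve[100], curve[400])    # distinct nodes seen after 100 and 400 steps
```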

19.
In the era of globalization, traditional theories and models of social systems are shifting their focus from isolation and independence to networks and connectedness. Analyzing these new complex social models is a growing and computationally demanding area of research. In this study, we investigate the integration of genetic algorithms (GAs) with a random-walk-based distance measure to find subgroups in social networks. We test our approach on synthetically generated realistic social network data sets. Our clustering experiments using random-walk-based distances reveal exceptionally accurate results compared with the experiments using Euclidean distances.
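The abstract does not specify its random-walk-based distance, so the sketch below uses the standard commute-time distance (via the pseudoinverse of the graph Laplacian) as a representative example of such a measure that a clustering algorithm could consume; the example graph is illustrative.

```python
import numpy as np

def commute_time_distances(A):
    """Commute-time distance, a standard random-walk-based distance:
    C(i, j) = vol(G) * (L+_ii + L+_jj - 2 * L+_ij), where L+ is the Moore-Penrose
    pseudoinverse of the graph Laplacian and vol(G) is the sum of degrees."""
    deg = A.sum(axis=1)
    L = np.diag(deg) - A
    Lp = np.linalg.pinv(L)
    d = np.diag(Lp)
    return A.sum() * (d[:, None] + d[None, :] - 2 * Lp)

# Illustrative example: two triangles joined by a single bridge edge.
A = np.zeros((6, 6))
for i, j in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]:
    A[i, j] = A[j, i] = 1
D = commute_time_distances(A)
print(D[0, 1], D[0, 5])   # within-community distance vs. distance across the bridge
```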

20.
This paper details an efficient algorithm for particles undergoing random walks in the presence of complex, two-dimensional, solid boundaries. The scheme is developed for the simulation of vortex diffusion using the random vortex method. Both vortex blobs and sheets are handled. The relevant modifications for using the algorithm with the vorticity redistribution technique are also discussed. The algorithm is designed to be used in the framework of an existing fast multipole implementation. A measure for the geometric complexity of a body is introduced and the algorithm’s efficiency is studied as various parameters are changed for bodies of varying complexity.
