Similar Literature
Found 20 similar articles (search time: 44 ms)
1.
Topology preservation is a major concern of parallel thinning algorithms for 2D and 3D binary images. To prove that a parallel thinning algorithm preserves topology, one must show that it preserves topology for all possible images. But it would be impractical to check all images, since there are far too many of them. Efficient sufficient conditions which can simplify such proofs for the 2D case were proposed by Ronse [Discrete Appl. Math. 21, 1988, 69-79]. By Ronse's results, a 2D parallel thinning algorithm can be proved to be topology preserving by checking a rather small number of configurations. This paper establishes sufficient conditions for 3D parallel thinning algorithms to preserve topology.
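Such proofs ultimately reduce to local configuration checks. As a hedged illustration (the classic 2D crossing-number test for a "simple" pixel, not Ronse's conditions themselves; names and encoding are ours):

```python
# Illustrative sketch of the local configuration test that 2D topology
# proofs reduce to: a border pixel is "simple" (deletable without changing
# topology) when its 8-neighbourhood crossing number equals 1.

def crossing_number(nbrs):
    """nbrs: the 8 neighbours of a pixel, clockwise from the top,
    as 0/1 values [p2, p3, p4, p5, p6, p7, p8, p9]."""
    # Count 0 -> 1 transitions around the closed neighbourhood ring.
    return sum(nbrs[i] == 0 and nbrs[(i + 1) % 8] == 1 for i in range(8))

def is_simple(nbrs):
    # Exactly one transition: deleting the centre pixel neither splits
    # nor merges object components inside this 3x3 window.
    return crossing_number(nbrs) == 1 and 1 <= sum(nbrs) <= 7

# Example: a pixel on a straight horizontal edge is simple.
print(is_simple([0, 0, 0, 1, 1, 1, 0, 0]))  # True
```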

2.
Producing skeletons that preserve the significant geometric features of patterns is an important goal. One feasible approach is to embed, in a known parallel algorithm, a method that produces bias-reduced skeletons, since a biased skeleton usually degrades the preservation of significant geometric features of patterns. From our observations, biased skeletons always appear at the junction of lines which form an angle less than or nearly equal to 90°. In this paper, the hidden deletable pixel (HDP), which influences the speed of deleting the boundary pixels on a concave side, is newly defined. Based on the comparable performance of our pseudo 1-subcycle parallel thinning algorithm (CH), a reduced form of larger support (two L-pixel vectors, which is not of the k × k support form), operated by an intermediate vector analysis of the pixels deleted in each thinning iteration, is developed for HDP detection to obtain bias-reduced skeletons at a reasonable computation cost. HDP restoration and parallel implementation are further considered to formulate an improved algorithm (CYS), where connectivity preservation is guaranteed by the use of CH's operators and HDP restoration. A set of synthetic images is used to quantify the skeleton from the geometric viewpoint and to investigate the skeleton variations obtained with different values of L. Based on the analysis, 3 ≤ L ≤ 9 is suggested for the current version of CYS. CYS is evaluated in comparison with two small-support algorithms (AFP3 and CH) and one larger-support algorithm (VRCT) using the same patterns. Performance is reported by the number of iterations (NI), CPU time (TC), and number of unmatched pixels (Nunmatch, a measure of bias reduction). Results show that on the TC measure, CYS is approximately 2 to 3 times slower than the others, while on the NI measure the four algorithms perform approximately identically. On the Nunmatch measure, CYS is approximately 2 to 3 times lower than the others. One-pixel boundary noise is also considered to explore noise immunity. The results suggest that the noise immunity of CYS and CH is identical and better than that of AFP3. In short, the better bias-reduced skeletons produced by CYS come at a reasonable computation cost.

3.
The notion of P-simple points was introduced by Bertrand to design parallel thinning algorithms. In ‘A 3D fully parallel thinning algorithm for generating medial faces’ (Pattern Recogn. Lett. 16:83–87, 1995), Ma proposed an algorithm for which there are objects whose topology is not preserved. In this paper, we propose a new application of P-simple points: to automatically correct Ma’s algorithm.

4.
In this work we present a new thinning scheme for reducing the noise sensitivity of 3D thinning algorithms. It uses iteration-by-iteration smoothing that removes some border points considered to be extremities. The proposed smoothing algorithm is composed of two parallel topology-preserving reduction operators. An efficient implementation of our algorithm is sketched and its topological correctness for (26, 6) pictures is proved.

5.
This paper discusses thinning of 3D binary images with the 4-subfield approach. Although a thinning algorithm operates on binary images, the algorithm itself can be represented as a set of three-color reduction templates, and it is topology preserving if the set of all its three-color templates is topology preserving. Sufficient and necessary conditions of time complexity O(n) are proposed for verifying the topological soundness of a 3D 4-subfield thinning algorithm of n three-color templates. Theories and techniques for computerizing such conditions are discussed. Two 4-subfield thinning algorithms on 3D images, one for generating medial curves and the other for generating medial surfaces, are proposed and proved to preserve topology by our sufficient and necessary conditions.
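As a hedged sketch of the subfield idea itself (the partition below is an illustrative choice, not necessarily the paper's): the grid is split into four classes such that no two 6-adjacent voxels share a class, so simultaneous deletions within one pass cannot interact.

```python
# Illustrative 4-way subfield partition of a 3D grid: any 6-neighbour move
# flips either the x-parity or the (y+z)-parity, so 6-adjacent voxels never
# share a subfield index. (Our partition, not necessarily the paper's.)
def subfield(x, y, z):
    return (x % 2) + 2 * ((y + z) % 2)

def subfield_thinning(obj, deletable, n_iters=1):
    """obj: set of (x, y, z) object voxels; deletable: caller-supplied test."""
    # One full iteration = four passes, one per subfield; deletions in a
    # pass are computed against the frozen object, then applied at once.
    for _ in range(n_iters):
        for s in range(4):
            doomed = {v for v in obj if subfield(*v) == s and deletable(obj, v)}
            obj = obj - doomed
    return obj
```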

6.
Two-Dimensional Parallel Thinning Algorithms Based on Critical Kernels
Critical kernels constitute a general framework in the category of abstract complexes for the study of parallel thinning in any dimension. The most fundamental result in this framework is that, if a subset Y of X contains the critical kernel of X, then Y is guaranteed to have “the same topology as X”. Here, we focus on 2D structures in spaces of two and three dimensions. We introduce the notion of a crucial pixel, which permits us to link this work with the framework of digital topology. We prove simple local characterizations, which allow us to express thinning algorithms by way of sets of masks. We propose several new parallel algorithms, which are both fast and simple to implement, that yield symmetrical or non-symmetrical skeletons of 2D objects in 2D or 3D grids. We prove some properties of these skeletons, related to topology preservation, to minimality, and to the inclusion of the topological axis, which may be seen as a generalization of the medial axis. We also show how to use critical kernels in order to provide simple proofs of the topological soundness of existing thinning schemes. Finally, we clarify the link between critical kernels, minimal non-simple sets, and P-simple points.
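To make the mask formulation concrete, here is a hedged sketch of one parallel, mask-based thinning pass: every object pixel is tested against a set of 3×3 deletion masks, and all matching pixels are removed simultaneously. The single mask shown is illustrative only; it is not one of the paper's masks and by itself does not guarantee topology preservation.

```python
# One parallel pass of mask-based thinning: deletions are decided on the
# unmodified image, then applied all at once.
import numpy as np

MASKS = [np.array([[0,  0,  0],
                   [-1, 1, -1],    # -1 = "don't care"
                   [1,  1,  1]])]  # illustrative: delete a north-border pixel

def matches(window, mask):
    care = mask != -1
    return np.array_equal(window[care], mask[care])

def parallel_pass(img):
    """img: 2D array of 0/1 values."""
    out = img.copy()
    for y in range(1, img.shape[0] - 1):
        for x in range(1, img.shape[1] - 1):
            if img[y, x] and any(matches(img[y-1:y+2, x-1:x+2], m) for m in MASKS):
                out[y, x] = 0  # all deletions applied simultaneously
    return out
```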

7.
In this paper, we present an efficient three-dimensional (3-D) parallel thinning algorithm for extracting both the medial surfaces and the medial axes of a 3-D object (given as a 3-D binary image). A new Euler table is derived to ensure the invariance of the Euler characteristic of the object, during thinning. An octree data structure of 3 × 3 × 3 lattice points is built to examine the local connectivity. The sets of "simple" points found by different researchers are compared with the constructed set. Different definitions of "surface" points including ours are given. By preserving the topological and the geometrical conditions, our algorithm produces desirable skeletons and performs better than others in terms of noise sensitivity and speed. Pre- and postprocessors can be used to remove additional noise spurs. Its use in defect analysis of objects produced by casting and forging is discussed.
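As a hedged illustration of the invariant involved (a direct enumeration, not the paper's Euler table), the Euler characteristic of a voxel set can be computed by counting the vertices, edges, faces and cubes of its cubical complex. In doubled coordinates, each k-face corresponds to a lattice point with exactly k odd coordinates, which makes the count a one-liner:

```python
from itertools import product

def euler_characteristic(voxels):
    """chi = V - E + F - C for the cubical complex spanned by a voxel set."""
    cells = set()
    for (x, y, z) in set(voxels):
        # Every face of the unit cube at (x, y, z), in doubled coordinates.
        for dx, dy, dz in product((0, 1, 2), repeat=3):
            cells.add((2 * x + dx, 2 * y + dy, 2 * z + dz))
    # A point with k odd coordinates is a k-face; chi = sum of (-1)^k.
    return sum((-1) ** sum(c % 2 for c in p) for p in cells)

# A single voxel is a solid cube: chi == 1 (8 - 12 + 6 - 1).
print(euler_characteristic([(0, 0, 0)]))  # 1
```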

8.
In 1996, Ma and Sonka proposed a thinning algorithm which yields curve skeletons for 3D binary images [C. Ma, M. Sonka, A fully parallel 3D thinning algorithm and its applications, Comput. Vis. Image Underst. 64 (3) (1996) 420–433]. This algorithm is one of the most cited thinning algorithms in the context of digital topology, whether through its use in medical applications or in comparisons with other thinning algorithms. In 2007, Wang and Basu [T. Wang, A. Basu, A note on ‘a fully parallel 3D thinning algorithm and its applications’, Pattern Recognit. Lett. 28 (4) (2007) 501–506] wrote a paper in which they claim that Ma and Sonka’s 3D thinning algorithm does not preserve topology. As they highlight in their paper, a counter-example was already given in 2001, in Lohou’s thesis [C. Lohou, Contribution à l’analyse topologique des images: étude d’algorithmes de squelettisation pour images 2D et 3D selon une approche topologie digitale ou topologie discrète, Ph.D. thesis, University of Marne-la-Vallée, France, 2001]. In this paper, it is shown how P-simple points guided the author towards a proof that Ma and Sonka’s algorithm does not always preserve topology. Moreover, the reasoning being very general, it could be reused for the same purpose, i.e., to simplify proofs of non-preservation of topology.

9.
Many economic models are completed by finding a parameter vector θ that optimizes a function f(θ), a task that can only be accomplished by iterating from a starting vector θ0. Using a generic iterative optimizer to carry out this task can waste enormous amounts of computation when applied to a class of problems defined here as finite mixture models. The finite mixture class is large and important in economics, and eliminating wasted computations requires only limited changes to standard code. Further, the approach described here greatly increases gains from parallel execution and opens possibilities for re-writing objective functions to make further efficiency gains. Documented code that implements the algorithm described is available from the author for objectives written in C and other languages. It runs in both serial and parallel mode using the MPI library. JEL Classification: C61; C63; D58
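A hedged toy sketch of the kind of saving described (illustrative only, not the author's library): in a finite mixture objective, each component's per-observation contribution depends only on that component's slice of θ, so it can be cached and recomputed only when the optimizer actually moves that slice.

```python
# Caching component contributions in a Gaussian mixture log-likelihood.
from functools import lru_cache
import math

DATA = (0.3, 1.2, -0.7, 2.1)  # toy sample

@lru_cache(maxsize=None)
def component_densities(mu, sigma):
    # Density of every observation under one component; cached by (mu, sigma).
    return tuple(math.exp(-0.5 * ((x - mu) / sigma) ** 2)
                 / (sigma * math.sqrt(2.0 * math.pi)) for x in DATA)

def mixture_loglik(weights, params):
    # params: tuple of (mu, sigma) pairs; unchanged pairs hit the cache.
    dens = [component_densities(mu, sigma) for mu, sigma in params]
    return sum(math.log(sum(w * d[i] for w, d in zip(weights, dens)))
               for i in range(len(DATA)))

# Moving only component 0 re-evaluates only component 0:
print(mixture_loglik((0.5, 0.5), ((0.0, 1.0), (2.0, 1.0))))
print(mixture_loglik((0.5, 0.5), ((0.1, 1.0), (2.0, 1.0))))
```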

10.
Binary Image Thinning Using Autowaves Generated by PCNN
This paper proposes a novel binary image thinning algorithm using the autowaves generated by a Pulse Coupled Neural Network (PCNN). Once autowaves travelling in different directions meet, the PCNN delivers the thinning result. Four conditions for autowave meeting are given; if a neuron satisfies one of the four conditions, the pixel corresponding to this neuron belongs to the thinning result. Moreover, the specification of the PCNN's parameters is given, which makes the proposed thinning algorithm easy to implement. Experimental results show that the proposed algorithm is efficient in extracting the skeletons of images (such as Chinese characters, alphabet letters, numbers, fingerprints, etc.). Finally, a rate called R_MSkel is given to evaluate the performance of different thinning algorithms; comparisons show that the proposed algorithm achieves a higher R_MSkel and costs less time.
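For readers unfamiliar with PCNNs, here is one step of the standard discrete PCNN model in its generic textbook form; the paper's thinning-specific parameterisation is not reproduced, and all parameter values below are illustrative.

```python
# One iteration of a generic PCNN: each neuron's internal activity is
# modulated by firing neighbours, and firing raises its own threshold,
# which is what makes pulse activity propagate as autowaves.
import numpy as np

def pcnn_step(S, F, L, Y, theta,
              beta=0.2, aF=0.1, aL=0.3, aT=0.2, VF=0.1, VL=0.2, VT=20.0):
    # Sum of firing 8-neighbours via array shifts (np.roll wraps at the
    # border; a real implementation would pad instead).
    nbr = sum(np.roll(np.roll(Y, dy, axis=0), dx, axis=1)
              for dy in (-1, 0, 1) for dx in (-1, 0, 1) if (dy, dx) != (0, 0))
    F = np.exp(-aF) * F + VF * nbr + S       # feeding input (stimulus S)
    L = np.exp(-aL) * L + VL * nbr           # linking input
    U = F * (1.0 + beta * L)                 # internal activity
    Y = (U > theta).astype(float)            # pulses: the travelling autowave
    theta = np.exp(-aT) * theta + VT * Y     # firing raises the threshold
    return F, L, Y, theta
```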

11.
Several sequential approximation algorithms for combinatorial optimization problems are based on the following paradigm: solve a linear or semidefinite programming relaxation, then use randomized rounding to convert fractional solutions of the relaxation into integer solutions for the original combinatorial problem. We demonstrate that this paradigm can also yield parallel approximation algorithms by showing how to convert certain linear programming relaxations into essentially equivalent positive linear programming [LN] relaxations that can be near-optimally solved in NC. Building on this technique, and finding some new linear programming relaxations, we develop improved parallel approximation algorithms for Max Sat, Max Directed Cut, and Max k-CSP. The Max Sat algorithm essentially matches the best approximation obtainable with sequential algorithms and has a fast sequential version. The Max k-CSP algorithm improves even over previous sequential algorithms. We also show a connection between probabilistic proof checking and a restricted version of Max k-CSP. This implies that our approximation algorithm for Max k-CSP can be used to prove inclusion in P for certain PCP classes.
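A hedged sketch of the sequential paradigm being parallelised: given a fractional LP solution y for MAX-SAT (y[i] is the relaxed truth value of variable i+1), randomized rounding sets each variable true independently with probability y[i]. The instance encoding and names here are ours.

```python
# Randomized rounding of a fractional MAX-SAT solution.
import random

def round_and_count(y, clauses, trials=100, seed=0):
    """clauses: lists of signed literals, e.g. [1, -2] means (x1 or not x2)."""
    rng = random.Random(seed)
    best = 0
    for _ in range(trials):
        # Each variable set true independently with its fractional value.
        assign = {i + 1: rng.random() < p for i, p in enumerate(y)}
        sat = sum(any(assign[abs(l)] == (l > 0) for l in c) for c in clauses)
        best = max(best, sat)
    return best

# Tiny example: two variables, three clauses.
print(round_and_count([0.5, 0.75], [[1, 2], [-1, 2], [1, -2]]))
```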

12.
Generally, a reduction operation (e.g., thinning or shrinking) on 3D binary images can be represented as a set of reduction templates, where every object voxel of the image that satisfies any template is turned into a background voxel. Verifying the topological soundness of a 3D parallel reduction operation is in general difficult, error-prone and time-consuming. This paper proposes sufficient conditions, checkable in time O(n), for verifying the topological soundness of 3D parallel 6-subiteration reduction operations of n templates, where such operations are performed alternately from the six orthogonal directions to turn object voxels into background voxels. With these sufficient conditions, the topological soundness of a 3D 6-subiteration parallel reduction operation can be verified by checking each and every one of its templates.
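As a hedged sketch of the scheme itself (the deletability test `deletable_from` is a hypothetical stand-in for the paper's templates), a 6-subiteration pass visits the six orthogonal directions in turn and, within each subiteration, deletes all qualifying border voxels simultaneously:

```python
# Driver for a 6-subiteration parallel reduction on a voxel set.
DIRECTIONS = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]

def six_subiteration_pass(obj, deletable_from):
    """obj: set of (x, y, z) object voxels; deletable_from: template test."""
    for d in DIRECTIONS:
        # Border voxels visible from direction d (background neighbour there).
        border = {v for v in obj
                  if tuple(a + b for a, b in zip(v, d)) not in obj}
        doomed = {v for v in border if deletable_from(obj, v, d)}
        obj = obj - doomed   # all deletions of this subiteration at once
    return obj
```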

13.
The implicit QR algorithm is a serial iterative algorithm for determining all the eigenvalues of an n × n symmetric tridiagonal matrix A. About 3n iterations, each requiring the serial application of about n similarity planar transformations, are required to reduce A to diagonal form. In this paper we propose a parallel algorithm in which up to n/2 similarity transformations can be applied simultaneously. In contrast to the original algorithm, which cannot take advantage of the architectures of parallel or vector machines, each iteration of the new algorithm mainly involves synchronous, lock-step operations which can effectively use vector and concurrency capabilities of SIMD machines. In practice we have observed that the number of iterations of the parallel algorithm is about three times that of the serial algorithm, but because many of the operations can be done in parallel, the total computation time is less. On a two-processor Cray-XMP we often observe a factor of 3 speedup over the standard QR algorithm for problems with n = 800.
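A minimal sketch of the building block behind both algorithms: a plane (Givens) similarity transformation acting on coordinates i and i+1. Rotations in disjoint coordinate planes commute, which is what lets the parallel algorithm apply up to n/2 of them simultaneously. Forming the full rotation matrix, as below, is for clarity only; real code updates the affected rows and columns in place.

```python
# One Givens similarity transformation G A G^T on rows/columns (i, i+1).
import numpy as np

def givens_similarity(A, i, c, s):
    """Apply the plane rotation in coordinates (i, i+1); requires c^2 + s^2 = 1."""
    G = np.eye(A.shape[0])
    G[i, i] = G[i + 1, i + 1] = c
    G[i, i + 1], G[i + 1, i] = s, -s
    return G @ A @ G.T   # similarity transform preserves eigenvalues
```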

14.
Objective: Commonly used representations of discrete airborne LIDAR point clouds in building detection, such as raw point clouds, raster grids, and triangulated irregular networks, suffer from complex model representation, difficult algorithm development, inaccurate result representation, and difficulty in representing multi-return data. This paper therefore studies the construction of a voxel-based structure model for LIDAR point clouds and building detection on top of it, and proposes a voxel-based building detection algorithm. Method: The point cloud is first regularized into a binary 3D voxel structure (1/0 values indicating whether a voxel contains laser points). A 3D filtering algorithm then classifies the data-carrying voxels into ground and non-ground voxels. Finally, exploiting the near-linear, height-jump characteristics of building edges, building edges are sought among the non-ground voxels as seed voxels, and the set of non-ground voxels 3D-connected to them is labeled as building voxels. Result: Experiments based on urban LIDAR data provided by the ISPRS (International Society for Photogrammetry and Remote Sensing), covering different building types, tested the sensitivity of the "neighborhood scale" parameter and the accuracy of the proposed algorithm. Quantitative evaluation shows that a 56-neighborhood is the optimal neighborhood scale, and building detection quality exceeds 95%: average completeness reaches 95.61% and average correctness 95.97%. Qualitative evaluation shows that large, dense, irregularly shaped, height-mixed and other complex buildings with unusual roof types can all be detected successfully. Conclusion: The proposed building detection algorithm uses a search-and-label scheme based on the spatial neighborhood relations of voxels, effectively detects various building targets, especially urban buildings, and its results lend themselves readily to 3D building model reconstruction.
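A hedged sketch of the first step described above: regularising a point cloud into a binary 3D voxel grid, where a voxel is 1 iff it contains at least one LIDAR return. The grid origin and resolution choices here are ours.

```python
# Voxelize a point cloud into a binary (0/1) 3D occupancy grid.
import numpy as np

def voxelize(points, voxel_size=1.0):
    """points: (N, 3) array of x, y, z coordinates."""
    pts = np.asarray(points, dtype=float)
    origin = pts.min(axis=0)                       # grid anchored at the min corner
    idx = np.floor((pts - origin) / voxel_size).astype(int)
    grid = np.zeros(tuple(idx.max(axis=0) + 1), dtype=np.uint8)
    grid[tuple(idx.T)] = 1                         # mark occupied voxels
    return grid, origin

grid, origin = voxelize([(0.2, 0.1, 0.0), (2.7, 0.4, 1.9)], voxel_size=1.0)
print(grid.shape, int(grid.sum()))  # (3, 1, 2) 2
```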

15.
Shared counters are among the most basic coordination structures in distributed computing. Known implementations of shared counters are either blocking, non-linearizable, or have a sequential bottleneck. We present the first counter algorithm that is both linearizable and non-blocking, and can provably achieve high throughput in k-synchronous executions—executions in which process speeds vary by at most a constant factor k. The algorithm is based on a novel variation of the software combining paradigm that we call bounded-wait combining (BWC). It can thus be used to obtain implementations, possessing the same properties, of any object that supports combinable operations, such as a stack or a queue. Unlike previous combining algorithms, where processes may have to wait for each other indefinitely, in the BWC algorithm a process only waits for other processes for a bounded period of time and then “takes destiny into its own hands”. In order to reason rigorously about the parallelism attainable by our algorithm, we define a novel metric for measuring the throughput of shared objects, which we believe is interesting in its own right. We use this metric to prove that our algorithm achieves throughput of Ω(N/log N) in k-synchronous executions, where N is the number of processes that can participate in the algorithm. Our algorithm uses two tools that we believe may prove useful for obtaining highly parallel non-blocking implementations of additional objects. The first is “synchronous locks”, locks that are respected by processes only in k-synchronous executions and are disregarded otherwise; the second is “pseudo-transactions”, a weakening of regular transactions that allows higher parallelism.

16.
17.
This paper considers four parallel Cholesky factorization algorithms, including SPOTRF from the February 1992 release of LAPACK, each of which calls parallel Level 2 or 3 BLAS, or both. A fifth parallel Cholesky algorithm, which calls serial Level 3 BLAS, is also described. The efficiency of these five algorithms on the CRAY-2, CRAY Y-MP/832, Hitachi Data Systems EX 80, and IBM 3090-600J is evaluated and compared with a vendor-optimized parallel Cholesky factorization algorithm. The fifth algorithm, calling serial Level 3 BLAS, provided the best performance of all algorithms that called BLAS routines. In fact, it outperformed the Cray-optimized libsci routine (SPOTRF) by 13–44%, depending on the problem size and the number of processors used.
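An illustrative blocked right-looking Cholesky factorization (a generic sketch, not the paper's fifth algorithm): the panel solve and the trailing update are matrix-matrix operations, i.e. exactly the TRSM/SYRK-style Level 3 BLAS work such algorithms delegate to BLAS calls. numpy stands in for BLAS here, and the block size is illustrative.

```python
# Blocked Cholesky: A = L L^T, processed nb columns at a time.
import numpy as np

def blocked_cholesky(A, nb=64):
    A = np.array(A, dtype=float)
    n = A.shape[0]
    for k in range(0, n, nb):
        e = min(k + nb, n)
        A[k:e, k:e] = np.linalg.cholesky(A[k:e, k:e])               # POTRF on block
        if e < n:
            A[e:, k:e] = A[e:, k:e] @ np.linalg.inv(A[k:e, k:e]).T  # TRSM (panel)
            A[e:, e:] -= A[e:, k:e] @ A[e:, k:e].T                  # SYRK (update)
    return np.tril(A)

# Sanity check on a small SPD matrix.
M = np.array([[4.0, 2.0], [2.0, 3.0]])
L = blocked_cholesky(M, nb=1)
print(np.allclose(L @ L.T, M))  # True
```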

18.
On approximating the longest path in a graph
We consider the problem of approximating the longest path in undirected graphs. In an attempt to pin down the best achievable performance ratio of an approximation algorithm for this problem, we present both positive and negative results. First, a simple greedy algorithm is shown to find long paths in dense graphs. We then consider the problem of finding paths in graphs that are guaranteed to have extremely long paths. We devise an algorithm that finds paths of logarithmic length in Hamiltonian graphs. This algorithm works for a much larger class of graphs (weakly Hamiltonian), where the result is the best possible. Since the hard case appears to be that of sparse graphs, we also consider sparse random graphs. Here we show that a relatively long path can be obtained, thereby partially answering an open problem of Broder et al. To explain the difficulty of obtaining better approximations, we also prove hardness results. We show that, for any ε < 1, the problem of finding a path of length n − n^ε in an n-vertex Hamiltonian graph is NP-hard. We then show that no polynomial-time algorithm can find a constant-factor approximation to the longest-path problem unless P = NP. We conjecture that the result can be strengthened to say that, for some constant δ > 0, finding an approximation of ratio n^δ is also NP-hard. As evidence toward this conjecture, we show that if any polynomial-time algorithm can approximate the longest path to a ratio of 2^(O(log^(1−ε) n)), for any ε > 0, then NP has a quasi-polynomial deterministic time simulation. The hardness results apply even to the special case where the input consists of bounded-degree graphs.
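A minimal sketch of the greedy idea behind the dense-graph result: repeatedly extend a path from its last vertex to any unvisited neighbour. The paper's contribution is the analysis showing this finds long paths in dense graphs; the walk itself is the obvious one.

```python
# Greedy path extension: grow the path until the endpoint is stuck.
def greedy_path(adj, start):
    """adj: dict mapping vertex -> set of neighbours."""
    path, seen = [start], {start}
    while True:
        nxt = next((u for u in adj[path[-1]] if u not in seen), None)
        if nxt is None:
            return path            # endpoint has no unvisited neighbour
        path.append(nxt)
        seen.add(nxt)

# Example on a 4-cycle: finds a Hamiltonian path.
print(greedy_path({0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {2, 0}}, 0))
```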

19.
In this paper, a sequential algorithm computing the all-vertex-pair distance matrix D and the path matrix P is given. On a PRAM EREW model with p processors, 1 ≤ p ≤ n^2, a parallel version of the sequential algorithm is shown. This method can also be used to obtain a parallel algorithm computing the transitive closure array A^* of an undirected graph. The time complexity of the parallel algorithm is O(n^3/p). If D, P and A^* are known, it is shown that the problems of finding all connected components, computing the diameter of an undirected graph, determining the center of a directed graph, and searching for a directed cycle with the minimum (maximum) length in a directed graph can all be solved in O(n^2/p log p) time.
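A sequential sketch in the spirit described above (Floyd–Warshall here, as a stand-in rather than necessarily the paper's algorithm): D[i][j] holds the shortest distance and P[i][j] the predecessor of j on a shortest i-to-j path. A parallel version would distribute the two inner loops over the p processors.

```python
# All-pairs shortest distances D and predecessor (path) matrix P.
INF = float("inf")

def all_pairs(n, edges):
    """edges: iterable of (u, v, w) directed weighted edges, vertices 0..n-1."""
    D = [[0.0 if i == j else INF for j in range(n)] for i in range(n)]
    P = [[None] * n for _ in range(n)]
    for u, v, w in edges:
        D[u][v], P[u][v] = w, u
    for k in range(n):
        for i in range(n):          # these two loops parallelise naturally
            for j in range(n):
                if D[i][k] + D[k][j] < D[i][j]:
                    D[i][j] = D[i][k] + D[k][j]
                    P[i][j] = P[k][j]   # j's predecessor via k
    return D, P

D, P = all_pairs(3, [(0, 1, 1.0), (1, 2, 2.0)])
print(D[0][2], P[0][2])  # 3.0 1
```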

20.
The circuit value update problem is the problem of updating values in a representation of a combinational circuit when some of the inputs are changed. We assume for simplicity that each combinational element has bounded fan-in and fan-out and can be evaluated in constant time. This problem is easily solved on an ordinary serial computer in O(W + D) time, where W is the number of elements in the altered subcircuit and D is the subcircuit's embedded depth (its depth measured in the original circuit). In this paper we show how to solve the circuit value update problem efficiently on a P-processor parallel computer. We give a straightforward synchronous, parallel algorithm that runs in expected time. Our main contribution, however, is an optimistic, asynchronous, parallel algorithm that runs in expected time, where W and D are the size and embedded depth, respectively, of the “volatile” subcircuit, the subcircuit of elements whose inputs either change or glitch as a result of the update. To our knowledge, our analysis provides the first analytical bounds on the running time of an optimistic, asynchronous, parallel algorithm.
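A hedged serial sketch of the update problem itself (our encoding, not the paper's parallel algorithm): after some inputs change, re-evaluate only elements whose inputs actually changed. Note that processing elements in level (topological) order, rather than the simple queue order below, is what yields the O(W + D) serial bound quoted above.

```python
# Change propagation through a combinational circuit.
from collections import deque

def update(values, gates, fanout, changed_inputs):
    """gates: node -> (func, input_nodes); fanout: node -> successor nodes."""
    queue = deque(changed_inputs)
    while queue:
        node = queue.popleft()
        for g in fanout.get(node, ()):
            func, ins = gates[g]
            new = func(*(values[i] for i in ins))
            if new != values[g]:
                values[g] = new       # propagate only genuine changes
                queue.append(g)
    return values

# Tiny example: c = a AND b; flipping a updates c.
vals = {"a": 0, "b": 1, "c": 0}
gates = {"c": (lambda x, y: x & y, ("a", "b"))}
vals["a"] = 1
print(update(vals, gates, {"a": ["c"], "b": ["c"]}, ["a"]))  # c becomes 1
```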
