Similar documents
20 similar documents found (search time: 31 ms)
1.
We describe inverse subdivision algorithms, with linear time and space complexity, to detect and reconstruct uniform Loop, Catmull–Clark, and Doo–Sabin subdivision structure in irregular triangular, quadrilateral, and polygonal meshes. We consider two main applications for these algorithms. The first is to enable interactive modeling systems that support uniform subdivision surfaces to use popular interchange file formats that do not preserve the subdivision structure, such as VRML, without loss of information. The second is to improve the compression efficiency of existing lossless connectivity compression schemes by optimally compressing meshes with Loop subdivision connectivity. Our Loop inverse subdivision algorithm is based on global connectivity properties of the covering mesh, a concept motivated by the covering surface from algebraic topology. Although the same approach can be used for other subdivision schemes, such as Catmull–Clark, we present a Catmull–Clark inverse subdivision algorithm based on a much simpler graph-coloring algorithm and a Doo–Sabin inverse subdivision algorithm based on properties of the dual mesh. Straightforward extensions of these approaches to other popular uniform subdivision schemes are also discussed. Published online: 3 July 2002

2.
3.
We present a novel technique for the efficient boundary evaluation of sweep operations applied to objects in polygonal boundary representation. These sweep operations include Minkowski addition, offsetting, and sweeping along a discrete rigid motion trajectory. Many previous methods focus on the construction of a polygonal superset (containing self-intersections and spurious internal geometry) of the boundary of the volumes which are swept. Only a few are able to determine a clean representation of the actual boundary, most of them in a discrete volumetric setting. We unify such superset constructions into a succinct common formulation and present a technique for the robust extraction of a polygonal mesh representing the outer boundary, i.e. it makes no general position assumptions and always yields a manifold, watertight mesh. It is exact for Minkowski sums and approximates swept volumes polygonally. By using plane-based geometry in conjunction with hierarchical arrangement computations we avoid the necessity of arbitrary-precision arithmetic and extensive special case handling. By restricting operations to regions containing pieces of the boundary, we significantly enhance the performance of the algorithm.

4.
The classical art gallery problem asks for a set of guards in a polygon P so that each point of P is seen by at least one guard. We introduce and explore the edge-covering problem: the guards are required to observe the edges of P (metaphorically, the paintings on the walls of the art gallery), not necessarily every interior point. We compare minimum edge and interior covers for a given polygon and analyze the bounds and complexity for the edge-covering problem. We also introduce and analyze a restricted edge-covering problem, where full visibility of each edge from at least one guard is required. For this problem we present an algorithm that computes a set of regions where a minimum set of guards must be located. The algorithm can also deal with the external visibility of a set of polygons.
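For illustration only (this is not the algorithm from the paper above): a minimal point-visibility test in a simple polygon, the basic primitive behind guard-coverage questions. It checks that the sight line between a guard and a sample point crosses no polygon edge; degenerate cases where the line grazes a vertex are ignored. The L-shaped gallery, the guard positions, and the function names are invented for the example.

```python
def _orient(a, b, c):
    """Sign of the cross product (b-a) x (c-a): >0 left turn, <0 right turn."""
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def _segments_cross(p1, p2, q1, q2):
    """True if segments p1p2 and q1q2 properly intersect."""
    d1, d2 = _orient(q1, q2, p1), _orient(q1, q2, p2)
    d3, d4 = _orient(p1, p2, q1), _orient(p1, p2, q2)
    return d1 * d2 < 0 and d3 * d4 < 0

def sees(guard, point, polygon):
    """Approximate visibility: no polygon edge properly blocks the sight line."""
    n = len(polygon)
    for i in range(n):
        a, b = polygon[i], polygon[(i + 1) % n]
        if _segments_cross(guard, point, a, b):
            return False
    return True

# Example: an L-shaped gallery with one reflex corner.
gallery = [(0, 0), (4, 0), (4, 2), (2, 2), (2, 4), (0, 4)]
print(sees((0.5, 0.5), (3.5, 1.5), gallery))   # True: both points in the bottom wing
print(sees((3.5, 1.5), (1.5, 3.5), gallery))   # False: the reflex corner blocks the view
```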

5.
In this paper, we introduce the concept of extended feature objects for similarity retrieval. Conventional approaches for similarity search in databases map each object in the database to a point in some high-dimensional feature space and define similarity as some distance measure in this space. For many similarity search problems, this feature-based approach is not sufficient. When retrieving partially similar polygons, for example, the search cannot be restricted to edge sequences, since similar polygon sections may start and end anywhere on the edges of the polygons. In general, inherently continuous problems such as the partial similarity search cannot be solved by using point objects in feature space. In our solution, we therefore introduce extended feature objects consisting of an infinite set of feature points. For efficient storage and retrieval of the extended feature objects, we determine the minimal bounding boxes of the feature objects in multidimensional space and store these boxes using a spatial access structure. In our concrete polygon problem, sets of polygon sections are mapped to 2D feature objects in high-dimensional space, which are then approximated by minimal bounding boxes and stored in an R-tree. The selectivity of the index is improved by using an adaptive decomposition of very large feature objects and a dynamic joining of small feature objects. For the polygon problem, translation, rotation, and scaling invariance is achieved by using the Fourier-transformed curvature of the normalized polygon sections. In contrast to vertex-based algorithms, our algorithm guarantees that no false dismissals may occur and additionally provides fast search times for realistic database sizes. We evaluate our method using real polygon data of a supplier for the car manufacturing industry. Edited by R. Güting. Received October 7, 1996 / Accepted March 28, 1997
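As a small illustration of the filter step described above (a sketch under my own assumptions, not the paper's exact pipeline): an extended feature object is approximated by the minimal bounding box of its feature points, and a box is kept whenever its minimum possible distance to the query is within the search radius, so the filter itself cannot cause false dismissals. The toy 3-d feature space and all names are invented for the example.

```python
import numpy as np

def bounding_box(points):
    """Minimal axis-aligned bounding box of a set of d-dimensional feature points."""
    pts = np.asarray(points, dtype=float)
    return pts.min(axis=0), pts.max(axis=0)

def mindist(query, box):
    """Smallest possible distance from the query to any point inside the box."""
    lo, hi = box
    q = np.asarray(query, dtype=float)
    clamped = np.clip(q, lo, hi)          # nearest point of the box to q
    return float(np.linalg.norm(q - clamped))

def filter_candidates(query, boxes, radius):
    """Keep every box that could contain a feature point within `radius` of the query."""
    return [i for i, box in enumerate(boxes) if mindist(query, box) <= radius]

# Toy usage: two extended feature objects in a 3-d feature space.
obj_a = [(0.1, 0.2, 0.0), (0.3, 0.1, 0.2)]
obj_b = [(2.0, 2.0, 2.0), (2.5, 2.2, 1.9)]
boxes = [bounding_box(obj_a), bounding_box(obj_b)]
print(filter_candidates((0.0, 0.0, 0.0), boxes, radius=0.5))   # -> [0]
```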

6.
Objective: Polygon offsetting is a fundamental problem in computer graphics, computational geometry, and computer-aided geometric design, with a wide range of applications. To handle the various types of polygon offsetting problems effectively, we propose a pixel-based region-subdivision algorithm for polygon offsetting. Method: A quadtree data structure is used to subdivide the given region, and interval arithmetic is then used to compute the full set of pixels that satisfy the offset condition. For polygons consisting only of line segments, a point-to-segment shortest-distance operator is used to speed up the computation. Results: The region-subdivision algorithm was applied to offsetting problems for polygons of different types and compared with the traditional pixel-based dilation algorithm for polygon offsetting. The proposed algorithm handles offsetting for all kinds of polygons effectively; compared with the traditional pixel-based dilation algorithm, it produces better results near vertices and runs faster. The region-subdivision algorithm is also applicable to a wider range of cases than traditional edge-offset methods and can handle offsetting problems that edge-offset algorithms cannot. Conclusion: The advantages of the algorithm are that self-intersection and joining problems need not be considered, and that it can handle many types of polygon offsetting problems that conventional methods cannot, including polygons with arc segments and islands.
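A minimal sketch of the region-subdivision idea (my illustration, not the paper's implementation): a square cell is accepted, rejected, or subdivided by bounding the distance from the polygon over the whole cell, using the cell centre plus half the cell diagonal as a simple stand-in for the interval-arithmetic enclosure, together with the point-to-segment distance operator for segment-only input.

```python
import math

def dist_point_segment(p, a, b):
    """Shortest distance from point p to segment ab."""
    ax, ay = a; bx, by = b; px, py = p
    dx, dy = bx - ax, by - ay
    if dx == dy == 0:
        return math.hypot(px - ax, py - ay)
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def dist_polygon(p, poly):
    n = len(poly)
    return min(dist_point_segment(p, poly[i], poly[(i + 1) % n]) for i in range(n))

def offset_cells(poly, r, cell, min_size, out):
    """Collect cells lying inside the offset band {p : dist(p, poly) <= r}."""
    x, y, s = cell                               # lower-left corner and side length
    cx, cy = x + s / 2, y + s / 2
    d = dist_polygon((cx, cy), poly)
    half_diag = s * math.sqrt(2) / 2
    if d + half_diag <= r:                       # cell certainly inside the band
        out.append(cell)
    elif d - half_diag > r:                      # cell certainly outside: discard
        return
    elif s <= min_size:                          # ambiguous pixel-sized cell: classify by centre
        if d <= r:
            out.append(cell)
    else:                                        # ambiguous: subdivide into four children
        h = s / 2
        for nx, ny in ((x, y), (x + h, y), (x, y + h), (x + h, y + h)):
            offset_cells(poly, r, (nx, ny, h), min_size, out)

square = [(2, 2), (6, 2), (6, 6), (2, 6)]
cells = []
offset_cells(square, r=1.0, cell=(0.0, 0.0, 8.0), min_size=0.125, out=cells)
print(len(cells), "cells approximate the offset band")
```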

7.
In this paper, we present a new approach to extract characters on a license plate of a moving vehicle, given a sequence of perspective-distortion-corrected license plate images. Different from many existing single-frame approaches, our method simultaneously utilizes spatial and temporal information. We first model the extraction of characters as a Markov random field (MRF), where the randomness is used to describe the uncertainty in pixel label assignment. With the MRF modeling, the extraction of characters is formulated as the problem of maximizing a posteriori probability based on given prior knowledge and observations. A genetic algorithm with a local greedy mutation operator is employed to optimize the objective function. Experiments and a comparison study were conducted, and some of our experimental results are presented in the paper. It is shown that our approach provides better performance than other single-frame methods. Received: 13 August 1997 / Accepted: 7 October 1997

8.
Abstract. For some multimedia applications, it has been found that domain objects cannot be represented as feature vectors in a multidimensional space. Instead, pair-wise distances between data objects are the only input. To support content-based retrieval, one approach maps each object to a k-dimensional (k-d) point and tries to preserve the distances among the points. Then, existing spatial access index methods such as the R-trees and KD-trees can support fast searching on the resulting k-d points. However, information loss is inevitable with such an approach, since the distances between data objects can only be preserved to a certain extent. Here we investigate the use of a distance-based indexing method. In particular, we apply the vantage point tree (vp-tree) method. There are two important problems for the vp-tree method that warrant further investigation: the n-nearest neighbors search and the updating mechanisms. We study an n-nearest neighbors search algorithm for the vp-tree, which is shown by experiments to scale up well with the size of the dataset and the desired number of nearest neighbors, n. Experiments also show that searching in the vp-tree is more efficient than in the -tree and the M-tree. Next, we propose solutions for the update problem for the vp-tree, and show by experiments that the algorithms are efficient and effective. Finally, we investigate the problem of selecting vantage points, propose a few alternative methods, and study their impact on the number of distance computations. Received June 9, 1998 / Accepted January 31, 2000
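For readers unfamiliar with the structure, here is a compact vp-tree sketch (generic metric-space code, not the algorithms or data sets of the paper): the tree splits at the median distance from a vantage point, and the n-nearest-neighbour search prunes a subtree whenever the triangle inequality shows it cannot beat the current n-th best distance. The Euclidean test data is invented for the example.

```python
import heapq, random

class VPNode:
    def __init__(self, point, radius, inside, outside):
        self.point, self.radius = point, radius
        self.inside, self.outside = inside, outside

def build(points, dist):
    """Pick a vantage point and split the remaining points at the median distance."""
    if not points:
        return None
    vp, rest = points[0], points[1:]
    if not rest:
        return VPNode(vp, 0.0, None, None)
    d = [dist(vp, p) for p in rest]
    mu = sorted(d)[len(d) // 2]                        # median split radius
    inside = [p for p, dp in zip(rest, d) if dp <= mu]
    outside = [p for p, dp in zip(rest, d) if dp > mu]
    return VPNode(vp, mu, build(inside, dist), build(outside, dist))

def n_nearest(root, query, n, dist):
    """n nearest neighbours of `query`, pruning with the triangle inequality."""
    best = []                                          # max-heap of (-distance, point)
    def tau():                                         # current n-th best distance
        return -best[0][0] if len(best) == n else float("inf")
    def search(node):
        if node is None:
            return
        d = dist(query, node.point)
        if len(best) < n:
            heapq.heappush(best, (-d, node.point))
        elif d < tau():
            heapq.heapreplace(best, (-d, node.point))
        if d <= node.radius:                           # query lies in the inner ball
            search(node.inside)
            if d + tau() >= node.radius:               # result ball may leak outside
                search(node.outside)
        else:
            search(node.outside)
            if d - tau() <= node.radius:               # result ball may reach back inside
                search(node.inside)
    search(root)
    return sorted((-md, p) for md, p in best)

euclid = lambda a, b: ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5
random.seed(0)
pts = [(random.random(), random.random()) for _ in range(200)]
tree = build(pts, euclid)
print(n_nearest(tree, (0.5, 0.5), 3, euclid))          # three closest points to the centre
```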

9.
On fast microscopic browsing of MPEG-compressed video
MPEG has been established as a compression standard for efficient storage and transmission of digital video. However, users are limited to VCR-like (and tedious) functionalities when viewing MPEG video. The usefulness of MPEG video is presently limited by the lack of tools available for fast browsing, manipulation and processing of MPEG video. In this paper, we first address the problem of rapid access to individual shots and frames in MPEG video. We build upon the compressed-video-processing framework proposed in [1, 8], and propose new and fast algorithms based on an adaptive mixture of approximation techniques for extracting spatially reduced image sequences of uniform quality from MPEG video across different frame types and also under different motion activities in the scenes. The algorithms execute faster than real time on a Pentium personal computer. We demonstrate how the reduced images facilitate fast and convenient shot- and frame-level video browsing and access, shot-level editing and annotation, without the need for frequent decompression of MPEG video. We further propose methods for reducing the auxiliary data size associated with the reduced images through exploitation of spatial and temporal redundancy. We also address how the reduced images lead to computationally efficient algorithms for video analysis based on intra- and inter-shot processing for video database and browsing applications. The algorithms, tools for browsing and techniques for video processing presented in this paper have been used by many in IBM Research on more than 30 h of MPEG-1 video for video browsing and analysis.

10.
Some significant progress related to multidimensional data analysis has been achieved in the past few years, including the design of fast algorithms for computing datacubes, selecting some precomputed group-bys to materialize, and designing efficient storage structures for multidimensional data. However, little work has been carried out on multidimensional query optimization issues. In particular, the response time (or evaluation cost) for answering several related dimensional queries simultaneously is crucial to OLAP applications. Recently, Zhao et al. first explored this problem by presenting three heuristic algorithms. In this paper we first consider in detail two cases of the problem in which all the queries are either hash-based star joins or index-based star joins only. In the case of the hash-based star join, we devise a polynomial approximation algorithm which delivers a plan whose evaluation cost is $O(n^{\epsilon})$ times the optimal, where n is the number of queries and $\epsilon$ is a fixed constant. We also present an exponential algorithm which delivers a plan with the optimal evaluation cost. In the case of the index-based star join, we present a heuristic algorithm which delivers a plan whose evaluation cost is n times the optimal, and an exponential algorithm which delivers a plan with the optimal evaluation cost. We then consider a general case in which both hash-based star-join and index-based star-join queries are included. For this case, we give a possible improvement on the work of Zhao et al., based on an analysis of their solutions. We also develop another heuristic and an exact algorithm for the problem. We finally conduct a performance study by implementing our algorithms. The experimental results demonstrate that the solutions delivered for the restricted cases are always within two times of the optimal, which confirms our theoretical upper bounds. Actually these experiments produce much better results than our theoretical estimates. To the best of our knowledge, this is the only development of polynomial algorithms for the first two cases which are able to deliver plans with deterministic performance guarantees in terms of the qualities of the plans generated. The previous approaches, including that of [ZDNS98], may generate a feasible plan for the problem in these two cases, but they do not provide any performance guarantee, i.e., the plans generated by their algorithms can be arbitrarily far from the optimal one. Received: July 21, 1998 / Accepted: August 26, 1999

11.
Tool-path generation from measured data
Presented in the paper is a procedure through which 3-axis NC tool-paths (for roughing and finishing) can be directly generated from measured data (a set of point sequence curves). The rough machining is performed by machining volumes of material in a slice-by-slice manner. To generate the roughing tool-path, it is essential to extract the machining regions (contour curves and their inclusion relationships) from each slice. For the machining region extraction, we employ the boundary extraction algorithm suggested by Park and Choi (Comput.-Aided Des. 33 (2001) 571). By making use of the boundary extraction algorithm, it is possible to extract the machining regions with O(n) time complexity, where n is the number of runs. The finishing tool-path can be obtained by defining a series of curves on the CL (cutter location) surface. However, calculating the CL-surface of the measured data involves time-consuming computations, such as swept volume modeling of an inverse tool and Boolean operations between polygonal volumes. To avoid these computational difficulties, we develop an algorithm to calculate the finishing tool-path based on well-known 2D geometric algorithms, such as 2D curve offsetting and polygonal chain intersection algorithms.
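As a toy illustration of extracting cutting regions from one slice (a generic scanline routine, not the boundary-extraction algorithm of Park and Choi cited above): intersect the slice contour with a horizontal line and pair up the crossings to obtain the intervals of material to machine, the basic step of a zigzag roughing pass. The pocket polygon is invented for the example.

```python
def scanline_spans(contour, y):
    """Return machining intervals [(x_enter, x_exit), ...] on the line y = const."""
    xs = []
    n = len(contour)
    for i in range(n):
        (x1, y1), (x2, y2) = contour[i], contour[(i + 1) % n]
        if (y1 <= y < y2) or (y2 <= y < y1):          # edge crosses the scanline
            t = (y - y1) / (y2 - y1)
            xs.append(x1 + t * (x2 - x1))
    xs.sort()
    return list(zip(xs[0::2], xs[1::2]))              # pair up entry/exit points

# Toy slice: a 10 x 6 pocket with a notch; the parity rule handles the concavity.
pocket = [(0, 0), (10, 0), (10, 6), (6, 6), (6, 3), (4, 3), (4, 6), (0, 6)]
for y in (1.0, 4.0):
    print(y, scanline_spans(pocket, y))
# y = 1.0 -> one span (0, 10); y = 4.0 -> two spans (0, 4) and (6, 10)
```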

12.
We consider the problem of scheduling a set of pages on a single broadcast channel using time-multiplexing. In a perfectly periodic schedule, time is divided into equal-size slots, and each page is transmitted in a time slot precisely every fixed interval of time (the period of the page). We study the case in which each page i has a given demand probability, and the goal is to design a perfectly periodic schedule that minimizes the average time a random client waits until its page is transmitted. We seek approximate polynomial solutions. Approximation bounds are obtained by comparing the costs of a solution provided by an algorithm and a solution to a relaxed (non-integral) version of the problem. A key quantity in our methodology is a fraction that depends on the maximum demand probability. The best known polynomial algorithm to date guarantees an approximation of . In this paper, we develop a tree-based methodology for perfectly periodic scheduling, and using new techniques, we derive algorithms with better bounds. For small values, our best algorithm guarantees an approximation of . On the other hand, we show that the integrality gap between the cost of any perfectly periodic schedule and the cost of the fractional problem is at least . We also provide algorithms with good performance guarantees for large values of . Received: December 2001 / Accepted: September 2002
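A small sketch of the setting (my illustration, not the paper's tree-based algorithms): periods are rounded to powers of two so they nest, offsets are then assigned greedily, and the expected wait of the resulting perfectly periodic schedule, the sum of p_i * T_i / 2, is compared with the classical square-root lower bound (sum of sqrt(p_i)) ** 2 / 2 commonly used for the relaxed problem. The demand vector is invented for the example.

```python
import math

def power_of_two_periods(probs):
    """Ideal fractional period of page i is S / sqrt(p_i) with S = sum_j sqrt(p_j);
    rounding each period up to a power of two keeps the bandwidth sum(1/T) <= 1."""
    s = sum(math.sqrt(p) for p in probs)
    return [2 ** math.ceil(math.log2(s / math.sqrt(p))) for p in probs]

def assign_offsets(periods):
    """Greedy collision-free offsets; feasible here because power-of-two periods nest."""
    hyper = max(periods)
    occupied = [False] * hyper
    offsets = {}
    for i in sorted(range(len(periods)), key=lambda j: periods[j]):
        T = periods[i]
        for off in range(T):
            slots = range(off, hyper, T)
            if not any(occupied[s] for s in slots):
                for s in slots:
                    occupied[s] = True
                offsets[i] = off
                break
        else:
            raise RuntimeError("no free offset (cannot happen with bandwidth <= 1)")
    return offsets

probs = [0.5, 0.25, 0.125, 0.125]
periods = power_of_two_periods(probs)
offsets = assign_offsets(periods)
avg_wait = sum(p * T / 2 for p, T in zip(probs, periods))      # expected client wait
lower_bound = sum(math.sqrt(p) for p in probs) ** 2 / 2        # fractional lower bound
print(periods, offsets, round(avg_wait, 3), round(lower_bound, 3))
```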

13.
Efficient admission control algorithms for multimedia servers
In this paper, we propose efficient admission control algorithms for multimedia storage servers that provide variable-bit-rate media streams. The proposed schemes are based on a slicing technique and use aggressive methods for admission control. We develop two types of admission control schemes: Future-Max (FM) and Interval Estimation (IE). The FM algorithm uses the maximum bandwidth requirement of the future to estimate the bandwidth requirement. The IE algorithm defines a class of admission control schemes that use a combination of the maximum and average bandwidths within each interval to estimate the bandwidth requirement of the interval. The performance evaluations done through simulations show that server utilization is improved by using the FM and IE algorithms. Furthermore, the quality of service is also improved by using the FM and IE algorithms. Several results depicting the trade-off between the implementation complexity, the desired accuracy, the number of accepted requests, and the quality of service are presented.
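A minimal sketch of how such checks can look (my reading of the abstract, not the authors' code): a Future-Max-style test charges every stream its peak remaining bandwidth, while an Interval-Estimation-style test charges, per interval, a mix of the maximum and the average demand. The stream profiles, capacity, and mixing weight alpha are invented for the example.

```python
def admit_future_max(admitted, candidate, capacity, now=0):
    """FM-style check: each stream is charged its maximum remaining (future) demand."""
    peak = lambda s: max(s[now:]) if len(s) > now else 0
    return sum(peak(s) for s in admitted) + peak(candidate) <= capacity

def admit_interval_estimate(admitted, candidate, capacity, interval=2, alpha=0.5):
    """IE-style check: within each interval, charge a mix of maximum and average demand."""
    def estimate(stream, start):
        window = stream[start:start + interval] or [0]
        return alpha * max(window) + (1 - alpha) * sum(window) / len(window)
    horizon = max(len(s) for s in admitted + [candidate])
    for start in range(0, horizon, interval):
        if sum(estimate(s, start) for s in admitted + [candidate]) > capacity:
            return False
    return True

# Two playing streams described by per-slice bandwidth profiles, capacity 15 units.
admitted = [[3, 4, 2, 1], [2, 2, 5, 5]]
print(admit_future_max(admitted, [4, 3, 2, 2], capacity=15))          # True  (peaks 4 + 5 + 4 = 13)
print(admit_future_max(admitted, [8, 3, 2, 2], capacity=15))          # False (peaks 4 + 5 + 8 = 17)
print(admit_interval_estimate(admitted, [8, 3, 2, 2], capacity=15))   # True: IE is less pessimistic
```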

14.
Abstract. The purpose of this study is to discuss existing fractal-based algorithms and propose novel improvements of these algorithms to identify tumors in brain magnetic resonance (MR) images. Considerable research has been pursued on fractal geometry in various aspects of image analysis and pattern recognition. Magnetic resonance images typically have a degree of noise and randomness associated with the natural randomness of structure. Thus, fractal analysis is appropriate for MR image analysis. For tumor detection, we describe existing fractal-based techniques and propose three modified algorithms using fractal analysis models. For each new method, the brain MR images are divided into a number of pieces. The first method involves thresholding the pixel intensity values; hence, we call the technique the piecewise-threshold-box-counting (PTBC) method. For the subsequent methods, the intensity is treated as the third dimension. We implement the improved piecewise-modified-box-counting (PMBC) and piecewise-triangular-prism-surface-area (PTPSA) methods, respectively. With the PTBC method, we find the differences in intensity histogram and fractal dimension between normal and tumor images. Using the PMBC and PTPSA methods, we may detect and locate the tumor in the brain MR images more accurately. Thus, the novel techniques proposed herein offer satisfactory tumor identification. Received: 13 October 2001 / Accepted: 28 May 2002 Correspondence to: K.M. Iftekharuddin
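A short box-counting sketch for context (a generic estimator, not the PTBC/PMBC/PTPSA variants themselves): threshold the image, count the boxes of decreasing size that contain foreground pixels, and take the slope of log N(s) versus log(1/s) as the fractal-dimension estimate. The synthetic disc image is invented for the example.

```python
import numpy as np

def box_count(mask, size):
    """Number of size x size boxes containing at least one foreground pixel."""
    h, w = mask.shape
    count = 0
    for i in range(0, h, size):
        for j in range(0, w, size):
            if mask[i:i + size, j:j + size].any():
                count += 1
    return count

def fractal_dimension(image, threshold):
    mask = image > threshold
    sizes = [s for s in (1, 2, 4, 8, 16, 32) if s <= min(mask.shape)]
    counts = [box_count(mask, s) for s in sizes]
    # Least-squares fit of log N(s) against log(1/s); the slope is the estimate.
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

# Synthetic example: a noisy filled disc should give a dimension close to 2.
yy, xx = np.mgrid[0:128, 0:128]
disc = ((xx - 64) ** 2 + (yy - 64) ** 2 < 40 ** 2).astype(float)
disc += 0.1 * np.random.rand(128, 128)
print(round(fractal_dimension(disc, threshold=0.5), 2))
```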

15.
The Z-coordinates of the vertices are completely governed by the Z-coordinates assigned to four selected ones. This allows describing the spatial polygonal mesh with just its 2D projection plus the heights of four vertices. As a consequence, these projections essentially capture the “spatial meaning” of the given surface, in the sense that, whatever spatial interpretations are drawn from them, they all exhibit essentially the same shape. Published online: 5 February 2003

16.
Summary. This work considers the problem of performing t tasks in a distributed system of p fault-prone processors. This problem, called do-all herein, was introduced by Dwork, Halpern and Waarts. The solutions presented here are for the model of computation that abstracts a synchronous message-passing distributed system with processor stop-failures and restarts. We present two new algorithms based on a new aggressive coordination paradigm by which multiple coordinators may be active as the result of failures. The first algorithm is tolerant of stop-failures and does not allow restarts. Its available processor steps (work) complexity is and its message complexity is . Unlike prior solutions, our algorithm uses redundant broadcasts when encountering failures and, for p = t and large f, it achieves better work complexity. This algorithm is used as the basis for another algorithm that tolerates stop-failures and restarts. This new algorithm is the first solution for the do-all problem that efficiently deals with processor restarts. Its available processor steps complexity is , and its message complexity is , where f is the total number of failures. Received: October 1998 / Accepted: September 2000

17.
We describe the use of the BlobTree and its application to the generation of a complex and visually accurate biological model of the sea shell Murex cabritii. Since the model is purely procedurally defined and does not rely on polygon mesh operations, it is resolution-independent and can be rendered directly using ray tracing. An interface has been built for the BlobTree using an interpreted programming language (Python). The language interface readily allows a user to procedurally describe the shell based on numeric data taken from the actual object. Published online: 15 March 2002
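A toy skeletal-implicit sketch in the same spirit (purely illustrative; this is not the BlobTree API from the paper): the field of a model is a sum of smooth kernels around skeletal points, and the surface is the level set field(p) = threshold, so neighbouring primitives blend automatically.

```python
import math

def soft_falloff(r, R):
    """A common soft-object kernel: 1 at r = 0, 0 for r >= R, smooth in between."""
    if r >= R:
        return 0.0
    q = (r / R) ** 2
    return (1.0 - q) ** 3

def field(p, primitives):
    """Sum of kernels centred at skeletal points (x, y, z, radius)."""
    return sum(soft_falloff(math.dist(p, (x, y, z)), R) for x, y, z, R in primitives)

def inside(p, primitives, threshold=0.5):
    return field(p, primitives) >= threshold

# Two blended blobs: the midpoint lies inside the blend even though it is
# outside each primitive's own iso-surface.
blobs = [(0.0, 0.0, 0.0, 1.5), (1.6, 0.0, 0.0, 1.5)]
print(inside((0.8, 0.0, 0.0), blobs))   # True: the two fields blend together
print(inside((3.5, 0.0, 0.0), blobs))   # False: far from both skeletal points
```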

18.
An optimized blank-nesting algorithm based on the bounding rectangle and its implementation
Optimal nesting of blanks is a major topic where CAD technology meets stamping-die design. Building on a polygon-vertex algorithm, this paper proposes an optimized nesting algorithm based on the bounding rectangle. The algorithm only needs to perform its computations within the bounding rectangle of the initial blank drawing to obtain the key layout parameters such as the feed step and strip width; during pre-nesting it does not require the offset-enlargement step used by traditional nesting algorithms, thereby avoiding the self-intersection interference and increased layout error that this step causes. The algorithm has been implemented with Visual C++ on the Inventor 9 platform, resulting in an efficient, practical, and reliable intelligent nesting system for blanking dies.
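A toy sketch of the bounding-rectangle idea (my illustration of the general approach, not the paper's algorithm): rotate the blank, take its bounding rectangle, add bridge allowances, and read off the feed step, strip width, and material utilisation for a single-row layout; scanning a few rotation angles then picks the best one. The blank polygon and bridge value are invented for the example.

```python
import math

def polygon_area(poly):
    """Shoelace formula; vertices given in order."""
    return abs(sum(x1 * y2 - x2 * y1
                   for (x1, y1), (x2, y2) in zip(poly, poly[1:] + poly[:1]))) / 2

def rotate(poly, deg):
    a = math.radians(deg)
    c, s = math.cos(a), math.sin(a)
    return [(x * c - y * s, x * s + y * c) for x, y in poly]

def layout_parameters(poly, deg, bridge):
    """Feed step, strip width and utilisation for one blank per bounding rectangle."""
    pts = rotate(poly, deg)
    xs, ys = [p[0] for p in pts], [p[1] for p in pts]
    width, height = max(xs) - min(xs), max(ys) - min(ys)
    step = width + bridge                    # feed distance between successive blanks
    strip = height + 2 * bridge              # strip width including edge bridges
    utilisation = polygon_area(poly) / (step * strip)
    return step, strip, utilisation

blank = [(0, 0), (40, 0), (40, 15), (25, 30), (0, 30)]
best = max((layout_parameters(blank, d, bridge=2.0) + (d,) for d in range(0, 180, 5)),
           key=lambda t: t[2])
print("step=%.1f strip=%.1f utilisation=%.2f at %d deg" % best)
```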

19.
Summary. Different replication algorithms provide different solutions to the same basic problem. However, there is no precise specification of the problem itself, only of particular classes of solutions, such as active replication and primary-backup. Having a precise specification of the problem would help us better understand the space of possible solutions and possibly come up with new ones. We present a formal definition of the problem solved by replication in the form of a correctness criterion called x-ability (exactly-once ability). An x-able service has obligations to its environment and its clients. It must update its environment under exactly-once semantics. Furthermore, it must provide idempotent, non-blocking request processing and deliver consistent results to its clients. We illustrate the value of x-ability through a novel replication protocol that handles non-determinism and external side-effects. The replication protocol is asynchronous in the sense that it may vary, at run-time and according to the asynchrony of the system, between some form of primary-backup and some form of active replication. Received: December 2000 / Accepted: September 2001

20.
Handling message semantics with Generic Broadcast protocols
Summary. Message ordering is a fundamental abstraction in distributed systems. However, ordering guarantees are usually purely “syntactic,” that is, message “semantics” is not taken into consideration despite the fact that in several cases semantic information about messages could be exploited to avoid ordering messages unnecessarily. In this paper we define the Generic Broadcast problem, which orders messages only if needed, based on the semantics of the messages. The semantic information about messages is introduced by conflict relations. We show that Reliable Broadcast and Atomic Broadcast are special instances of Generic Broadcast. The paper also presents two algorithms that solve Generic Broadcast. Received: August 2000 / Accepted: August 2001
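A tiny sketch of the conflict-relation idea (illustrative only, not the paper's protocols): if messages are reads and writes on named items, two messages conflict when they touch the same item and at least one is a write; an empty conflict relation corresponds to Reliable Broadcast and the total relation to Atomic Broadcast, as the paper notes. The message type and helper names are invented for the example.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Msg:
    op: str      # "read" or "write"
    item: str

def conflicts(a: Msg, b: Msg) -> bool:
    """True if a and b must be delivered in the same order everywhere."""
    return a.item == b.item and "write" in (a.op, b.op)

def needs_ordering(pending, new_msg):
    """Pending messages that the new message has to be ordered against."""
    return [m for m in pending if conflicts(m, new_msg)]

pending = [Msg("write", "x"), Msg("read", "y")]
print(needs_ordering(pending, Msg("read", "x")))    # conflicts with the write on x
print(needs_ordering(pending, Msg("read", "y")))    # [] - two reads never conflict
```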
