20 similar documents found (search time: 0 ms)
1.
Willem F. Bronsvoort 《The Visual computer》1987,3(4):176-185
An algorithm is presented for display of Constructive Solid Geometry (CSG) models, in which Boolean evaluation of a model is done during image generation only for the visible parts of the model.The algorithm is based on Atherton's CSG scan-line algorithm. It involves, however, dividing the image plane into strips of varying width, inside which areas are determined where only one face is visible. This may involve subdivision of an area into smaller areas.Two versions of the algorithm are presented: an efficient visible-line version for a raster display, and a visible-surface version, which turns out to be an improved variant for simple models of Atherton's algorithm.Sample images and CPU times for some models are given to show the efficiency of the algorithm. 相似文献
2.
The purpose of this research is to develop an experimental environment for solid modelling which can provide a wide range of graphics primitives and operations toward making the modelling process more flexible. The system can handle different types of representations and combine them with an extended-CSG structure. A new geometric editor to enter and modify the data is also reported. To handle different types of representation, the system has an object-oriented structure and the idea of a common function is introduced to give the interface between them. Ray tracing is used as the rendering method. New techniques for generating images for complex objects are shown with several modelling examples.
3.
4.
Willem F. Bronsvoort 《Computer aided design》1986,18(10):533-536
Atherton's scan-line hidden surface algorithm for Boolean combinations of plane-faced primitives is intended for fast image generation of complex models.
An important step in the algorithm is the Boolean evaluation at the required positions on each scan-line. This step, however, can be very time consuming if performed in a straightforward way.
Some techniques are presented here that considerably reduce the time needed for the Boolean evaluation. These include a fast bottom-up evaluation technique and partial backface elimination.
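The core of such evaluations is a bottom-up pass over the CSG tree. The sketch below illustrates that idea for point-membership classification; it is not Atherton's scan-line method itself, and the `Primitive`/`Node` layout and the box primitive are invented for illustration.

```python
# Minimal bottom-up Boolean evaluation over a CSG tree (illustrative only):
# each leaf reports whether a point lies inside its primitive, and internal
# nodes combine the answers with their set operation.

class Primitive:
    def __init__(self, contains):
        self.contains = contains  # function: point -> bool

class Node:
    def __init__(self, op, left, right):
        self.op, self.left, self.right = op, left, right

def inside(node, p):
    """Classify point p against the CSG model in one bottom-up pass."""
    if isinstance(node, Primitive):
        return node.contains(p)
    a, b = inside(node.left, p), inside(node.right, p)
    if node.op == "union":
        return a or b
    if node.op == "intersection":
        return a and b
    if node.op == "difference":
        return a and not b
    raise ValueError(node.op)

# Example model: a unit box with a smaller box subtracted from one corner.
def box(lo, hi):
    return Primitive(lambda p: all(l <= x <= h for x, l, h in zip(p, lo, hi)))

model = Node("difference", box((0, 0, 0), (1, 1, 1)), box((0.5, 0.5, 0.5), (2, 2, 2)))

print(inside(model, (0.25, 0.25, 0.25)))  # True: in the big box, outside the cut
print(inside(model, (0.75, 0.75, 0.75)))  # False: removed by the difference
```

A scan-line algorithm performs a comparable evaluation per visible span rather than per point, which is why reducing the cost of each evaluation matters so much.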
5.
Wilhelm Oberaigner 《Parallel Computing》1985,2(2):173-182
We propose two parallel algorithms for the rounding-exact evaluation of sums of products. The first method transforms products to sums and applies one of the known methods for rounding-exact summation, with time complexity O(n²) on n processors (n denoting the "length" of the expression). The second method approximates the products as well as the sum; it has average time complexity O(ld n) on n/2 processors, and average time complexity O(n) when viewed as a sequential algorithm.
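The O(ld n) (binary-logarithm) behaviour comes from pairwise tree reduction: adjacent pairs can be summed independently in each round, halving the number of terms. The sequential sketch below only illustrates that dependency structure; it does not reproduce the paper's rounding-exact accumulation.

```python
# Pairwise (tree) summation: each while-iteration corresponds to one
# parallel step in which all pair additions are independent.

def pairwise_sum(xs):
    xs = list(xs)
    rounds = 0
    while len(xs) > 1:
        # One "parallel step": every adjacent pair is added independently.
        nxt = [xs[i] + xs[i + 1] for i in range(0, len(xs) - 1, 2)]
        if len(xs) % 2:          # carry an unpaired trailing element forward
            nxt.append(xs[-1])
        xs, rounds = nxt, rounds + 1
    return xs[0], rounds

total, depth = pairwise_sum([1.0] * 8)
print(total, depth)  # 8.0 after 3 rounds: ld(8) = 3
```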
6.
7.
In this note, some comments are given on the paper: D.J. Evans and Shao-wen Mai, Two parallel algorithms for the convex hull problem in a two dimensional space, Parallel Computing 2 (1985) 313–326.
8.
Two parallel algorithms for determining the convex hull of a set of data points in two-dimensional space are presented. Both are suitable for MIMD parallel systems. The first is based on a divide-and-conquer strategy, in which some simple convex hulls are generated first and the final convex hull of all points is then obtained by repeatedly merging two sub-hulls. The second algorithm works by picking out the points that are necessarily in the convex hull and discarding the points that are definitely not in the convex hull. Experimental results on an MIMD parallel system of 4 processors are analysed and presented.
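Both strategies ultimately rest on discarding points that cannot lie on the hull boundary. As a sequential baseline, Andrew's monotone chain (not the paper's parallel MIMD scheme) performs exactly that discarding with a cross-product turn test:

```python
# Andrew's monotone chain convex hull: sort the points, then build the
# lower and upper hull chains, popping any point that makes a non-left
# turn -- i.e. a point that is definitely not on the hull.

def cross(o, a, b):
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(points):
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def half(seq):
        h = []
        for p in seq:
            while len(h) >= 2 and cross(h[-2], h[-1], p) <= 0:
                h.pop()  # discard points not on this hull chain
            h.append(p)
        return h
    lower, upper = half(pts), half(reversed(pts))
    return lower[:-1] + upper[:-1]  # endpoints are shared between chains

print(convex_hull([(0, 0), (1, 0), (1, 1), (0, 1), (0.5, 0.5)]))
# the interior point (0.5, 0.5) is discarded
```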
9.
10.
Let P and Q be two convex n-vertex polygons. We consider the problem of computing, in parallel, some functions of P and Q when P and Q are disjoint. The model of parallel computation we consider is the CREW-PRAM, i.e., the synchronous shared-memory model where concurrent reads are allowed but no two processors can simultaneously attempt to write in the same memory location (even if they are trying to write the same thing). We show that a CREW-PRAM having n^{1/k} processors can compute the following functions in O(k^{1+ε}) time: (i) the common tangents between P and Q, and (ii) the distance between P and Q (and hence a straight line separating them). The positive constant ε can be made arbitrarily close to zero. Even with a linear number of processors, it was not previously known how to achieve constant-time performance for computing these functions. The algorithm for problem (ii) is easily modified to detect the case of zero distance as well. This research was supported by the Office of Naval Research under Grants N00014-84-K-0502 and N00014-86-K-0689, and the National Science Foundation under Grant DCR-8451393, with matching funds from AT&T.
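For problem (ii), a brute-force sequential check makes the task concrete: minimise point-to-segment distance over all vertex/edge pairs of the two polygons. This O(n·m) loop is only a sketch for verifying small instances, far from the sub-logarithmic parallel bound the paper achieves.

```python
# Distance between two disjoint convex polygons, by exhaustive
# vertex-to-edge minimisation.
import math

def seg_dist(p, a, b):
    """Distance from point p to segment a-b."""
    ax, ay = b[0] - a[0], b[1] - a[1]
    px, py = p[0] - a[0], p[1] - a[1]
    t = max(0.0, min(1.0, (px * ax + py * ay) / (ax * ax + ay * ay)))
    return math.hypot(px - t * ax, py - t * ay)

def polygon_distance(P, Q):
    best = math.inf
    for A, B in ((P, Q), (Q, P)):          # vertices of one vs. edges of the other
        for p in A:
            for i in range(len(B)):
                best = min(best, seg_dist(p, B[i], B[(i + 1) % len(B)]))
    return best

P = [(0, 0), (1, 0), (1, 1), (0, 1)]       # unit square
Q = [(3, 0), (4, 0), (4, 1), (3, 1)]       # unit square shifted right by 3
print(polygon_distance(P, Q))              # 2.0: gap between the facing edges
```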
11.
Advanced application domains such as computer-aided design, computer-aided software engineering, and office automation are characterized by their need to store, retrieve, and manage large quantities of data having complex structures. A number of object-oriented database management systems (OODBMS) are currently available that can effectively capture and process such complex data. Existing implementations of OODBMS outperform relational systems by maintaining and querying cross-references among related objects. However, existing OODBMS still do not meet the efficiency requirements of advanced applications that require the execution of complex queries involving the retrieval of a large number of data objects and the relationships among them. Parallel execution can significantly improve the performance of complex OO queries. In this paper, we analyze the performance of parallel OO query processing algorithms for various benchmark application domains. The application domains are characterized by specific mixes of queries of different semantic complexities. The performance of the application domains has been analyzed for various system and data parameters by running parallel programs on a 32-node transputer-based parallel machine developed at the IBM Research Center at Yorktown Heights. The parallel processing algorithms, data routing techniques, and query management and control strategies have been implemented to obtain accurate estimates of control and processing overheads. However, generation of large complex databases for the study was impractical; hence, the data used in the simulation have been parameterized. The parallel OO query processing algorithms analyzed in this study are based on a query graph approach rather than the traditional query tree approach. Using the query graph approach, a query is processed by simultaneously initiating execution at several object classes, thereby improving parallelism. During processing, the algorithms avoid the execution of time-consuming join operations by making use of the object references among the objects. Further, the algorithms do not generate any temporary data, thereby reducing disk accesses. This is accomplished by marking the selected objects and by employing a two-phase query processing strategy.
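The reference-chasing idea can be sketched in a few lines: instead of building a relational join, the query follows stored object references, marks qualifying objects in a first phase, and collects results in a second. The `Dept`/`Emp` classes and attribute names below are invented for illustration; this is not the paper's transputer implementation.

```python
# Two-phase, join-free query over an object graph (illustrative sketch).

class Dept:
    def __init__(self, name, budget):
        self.name, self.budget, self.marked = name, budget, False

class Emp:
    def __init__(self, name, dept):
        self.name, self.dept = name, dept      # stored object reference

def query(emps, min_budget):
    for e in emps:                             # reset marks from earlier queries
        e.dept.marked = False
    # Phase 1: mark selected Dept objects -- no temporary join table is built.
    for e in emps:
        if e.dept.budget >= min_budget:
            e.dept.marked = True
    # Phase 2: traverse the references again, collecting employees of marked depts.
    return [e.name for e in emps if e.dept.marked]

r, s = Dept("R&D", 900), Dept("Sales", 100)
emps = [Emp("ann", r), Emp("bob", s), Emp("cyd", r)]
print(query(emps, 500))  # ['ann', 'cyd']
```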
12.
Elimination Game is a well-known algorithm that simulates Gaussian elimination of matrices on graphs, and it computes a triangulation of the input graph. The number of fill edges in the computed triangulation depends strongly on the order in which Elimination Game processes the vertices, and in general the produced triangulations are neither minimum nor minimal. To obtain a triangulation close to minimum, the Minimum Degree heuristic is widely used in practice, but until now little was known about the theoretical mechanisms involved.
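A compact sketch of Elimination Game with the Minimum Degree heuristic: repeatedly eliminate a vertex of smallest current degree, turning its neighbourhood into a clique; the edges added in the process are the fill edges.

```python
# Elimination Game with Minimum Degree vertex selection (sequential sketch).

def elimination_game_min_degree(n, edges):
    adj = {v: set() for v in range(n)}
    for a, b in edges:
        adj[a].add(b); adj[b].add(a)
    fill = []
    remaining = set(range(n))
    while remaining:
        v = min(remaining, key=lambda u: len(adj[u]))  # Minimum Degree choice
        nbrs = list(adj[v])
        for i in range(len(nbrs)):                     # make N(v) a clique
            for j in range(i + 1, len(nbrs)):
                a, b = nbrs[i], nbrs[j]
                if b not in adj[a]:
                    adj[a].add(b); adj[b].add(a)
                    fill.append((min(a, b), max(a, b)))
        for u in nbrs:                                 # remove v from the graph
            adj[u].discard(v)
        adj[v].clear()
        remaining.remove(v)
    return fill

# A 4-cycle 0-1-2-3 needs exactly one fill edge (a chord) to become chordal.
print(elimination_game_min_degree(4, [(0, 1), (1, 2), (2, 3), (3, 0)]))
```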
13.
Hongmei He, Ondrej Sýkora, Ana Salagean, Erkki Mäkinen 《Journal of Parallel and Distributed Computing》2007
Genetic algorithms (GAs) have been applied successfully to the 2-page crossing number problem, but since they work with one global population, the search time and space are limited. Parallelisation provides an attractive prospect for improving the efficiency and solution quality of GAs. This paper investigates the complexity of parallel genetic algorithms (PGAs) based on two evaluation measures: the ratio of computation time to communication time and the ratio of population size to chromosome size. Moreover, the paper unifies the framework of PGA models with the function PGA(subpopulation size, cluster size, migration period, topology), and explores the performance of PGAs on the 2-page crossing number problem.
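An island-model skeleton makes the PGA(subpopulation size, cluster size, migration period, topology) parameterisation concrete: independent subpopulations evolve, and every migration period each island passes its best individual to its ring neighbour. The OneMax fitness below stands in for the much harder 2-page crossing number objective, and all details are an illustrative toy, not the paper's framework.

```python
# Toy island-model PGA sketch with a ring topology and OneMax fitness.
import random

def evolve(subpops, generations, migration_period, rng):
    fitness = lambda ind: sum(ind)             # OneMax: count of 1 bits
    for gen in range(1, generations + 1):
        for pop in subpops:
            pop.sort(key=fitness, reverse=True)
            child = pop[0][:]                  # mutate a copy of the best
            child[rng.randrange(len(child))] ^= 1
            pop[-1] = child                    # replace the worst individual
        if gen % migration_period == 0:        # ring migration of best individuals
            bests = [max(pop, key=fitness) for pop in subpops]
            for k, pop in enumerate(subpops):
                pop[-1] = bests[(k - 1) % len(subpops)][:]
    return max((ind for pop in subpops for ind in pop), key=fitness)

rng = random.Random(1)
islands = [[[rng.randint(0, 1) for _ in range(12)] for _ in range(6)] for _ in range(4)]
best = evolve(islands, generations=200, migration_period=10, rng=rng)
print(sum(best))
```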
14.
Celso Ribeiro 《Parallel Computing》1984,1(3-4):287-294
We study the performance and use of vector computers for the solution of combinatorial optimization problems, particularly dynamic programming and shortest path problems. A general model for performance evaluation and vector implementations of the problems described above are studied. These implementations were done on a CRAY-1 vector computer, and the computational results obtained show (i) the adequacy of the performance evaluation model and (ii) very substantial gains in computing time, indicating that vector computers will be of great importance in the field of combinatorial optimization.
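Shortest-path dynamic programming vectorises well because its inner loop is a pure element-wise min-plus over whole rows, precisely the kind of operation a vector pipeline executes efficiently. A Floyd-Warshall sketch (illustrative; not the paper's CRAY-1 implementation) with the vectorisable row update marked:

```python
# All-pairs shortest paths; the inner comprehension is the row-wise
# min-plus step that a vector machine would pipeline.
INF = float("inf")

def floyd_warshall(dist):
    n = len(dist)
    for k in range(n):
        row_k = dist[k]
        for i in range(n):
            d_ik = dist[i][k]
            # vectorisable step: dist[i][:] = min(dist[i][:], d_ik + row_k[:])
            dist[i] = [min(dij, d_ik + dkj) for dij, dkj in zip(dist[i], row_k)]
    return dist

graph = [
    [0,   3,   INF],
    [INF, 0,   1],
    [2,   INF, 0],
]
print(floyd_warshall(graph)[0][2])  # 4: path 0 -> 1 -> 2
```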
15.
Parallel updates of minimum spanning trees (MSTs) have been studied in the past. These updates allowed a single change in the underlying graph, such as a change in the cost of an edge or the insertion of a new vertex. Multiple update problems for MSTs are concerned with handling more than one such change. In the sequential case, multiple update problems may be solved using repeated applications of an efficient algorithm for a single update. However, for efficiency reasons, parallel algorithms for multiple update problems must consider all changes to the underlying graph simultaneously. In this paper we describe parallel algorithms for updating an MST when k new vertices are inserted or deleted in the underlying graph, when the costs of k edges are changed, or when k edge insertions and deletions are performed. For the multiple vertex insertion update, our algorithm achieves time and processor bounds of O(log n · log k) and nk/(log n · log k), respectively, on a CREW parallel random access machine. These bounds are optimal for dense graphs. A novel feature of this algorithm is a transformation of the previous MST and k new vertices into a bipartite graph which enables us to obtain the above-mentioned bounds.
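Sequentially, a single vertex insertion can be handled with the classical observation that the new MST uses only the old MST edges plus the new vertex's edges, so a Kruskal pass over that small edge set suffices. The sketch below shows just that sequential single-update step; the paper's contribution of handling k changes at once in parallel is not attempted here.

```python
# MST update for one inserted vertex, via Kruskal over (old MST + new edges).

def kruskal(n, edges):
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    mst = []
    for w, a, b in sorted(edges):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb
            mst.append((w, a, b))
    return mst

def insert_vertex(n, mst_edges, new_edges):
    """MST after adding vertex n with weighted edges new_edges = [(w, n, u), ...]."""
    return kruskal(n + 1, mst_edges + new_edges)

old = kruskal(3, [(1, 0, 1), (2, 1, 2), (5, 0, 2)])   # triangle: MST weights {1, 2}
new = insert_vertex(3, old, [(1, 3, 0), (1, 3, 2)])
print(sorted(w for w, _, _ in new))  # [1, 1, 1]: edge (1,2) of weight 2 is displaced
```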
16.
Diffusion curves are a powerful vector graphic representation that stores an image as a set of 2D Bézier curves with colors defined on either side. These colors are diffused over the image plane, resulting in smooth color regions as well as sharp boundaries. In this paper, we introduce a new automatic diffusion curve coloring algorithm. We start by defining a geometric heuristic for the maximum density of color control points along the image curves. Following this, we present a new algorithm to set the colors of these points so that the resulting diffused image is as close as possible to a source image in a least-squares sense. We compare our coloring solution to the existing one, which fails for textured regions, small features, and inaccurately placed curves. The second contribution of the paper is to extend the diffusion curve representation to include texture details based on Gabor noise. Like the curves themselves, the defined texture is resolution-independent and represented compactly. We define methods to automatically make an initial guess for the noise texture, and we provide intuitive manual controls to edit the parameters of the Gabor noise. Finally, we show that the diffusion curve representation itself extends to storing any number of attributes in an image, and we demonstrate this functionality with image stippling and hatching applications.
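The "diffusion" here is a Laplace solve with the curve colors as boundary conditions. In 1D this reduces to Jacobi iteration between two fixed endpoint colors, and the interior converges to a linear blend, which is what the smooth color regions between curves look like. A minimal sketch (purely illustrative of the diffusion step, not the paper's 2D solver):

```python
# 1D color diffusion between two boundary colors via Jacobi iteration.

def diffuse_1d(left, right, n, iters=2000):
    u = [0.0] * n
    for _ in range(iters):
        u = [left if i == 0 else right if i == n - 1
             else 0.5 * (u[i - 1] + u[i + 1])   # average of neighbours
             for i in range(n)]
    return u

u = diffuse_1d(0.0, 1.0, 5)
print([round(x, 3) for x in u])  # converges to [0.0, 0.25, 0.5, 0.75, 1.0]
```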
17.
Eunice E. Santos, Jeffrey M. Rickman, Gayathri Muthukrishnan, Shuangtong Feng 《The Journal of supercomputing》2008,44(3):274-290
In this paper, we design and implement a variety of parallel algorithms for both sweep spin selection and random spin selection. We analyze our parallel algorithms on LogP, a portable and general parallel machine model, and obtain rigorous theoretical runtime results on LogP for all the parallel algorithms. Moreover, a guiding equation is derived for choosing data layouts (blocked vs. stripped) for sweep spin selection. For random spin selection, we develop parallel algorithms with efficient communication schemes. We introduce two novel schemes, the FML scheme and the α-scheme, analyze the randomness of our schemes using statistical methods, and provide comparisons between the different schemes.
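The two selection disciplines can be illustrated with a toy single-spin-flip Metropolis update of a 1D Ising chain: "sweep" visits sites in order, while "random" draws each site uniformly. This sequential sketch only shows the selection orders the paper parallelises; the physics parameters are arbitrary.

```python
# Sweep vs. random spin selection in 1D Ising Metropolis updates.
import math, random

def metropolis_step(spins, i, beta, rng):
    n = len(spins)
    # Energy change for flipping spin i (nearest-neighbour coupling J = 1).
    dE = 2 * spins[i] * (spins[(i - 1) % n] + spins[(i + 1) % n])
    if dE <= 0 or rng.random() < math.exp(-beta * dE):
        spins[i] = -spins[i]

def simulate(n, steps, beta, selection, seed=0):
    rng = random.Random(seed)
    spins = [rng.choice((-1, 1)) for _ in range(n)]
    for t in range(steps):
        i = t % n if selection == "sweep" else rng.randrange(n)
        metropolis_step(spins, i, beta, rng)
    return spins

final = simulate(n=32, steps=5000, beta=2.0, selection="sweep")
print(abs(sum(final)))  # magnetisation magnitude; large when the chain orders
```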
18.
David A. Bader, Joseph Jájá, David Harwood, Larry S. Davis 《The Journal of supercomputing》1996,10(2):141-168
This paper presents efficient and portable implementations of a powerful image enhancement process, the Symmetric Neighborhood Filter (SNF), and an image segmentation technique that makes use of the SNF and a variant of the conventional connected components algorithm which we call -Connected Components. We use efficient techniques for distributing and coalescing data, as well as efficient combinations of task and data parallelism. The image segmentation algorithm relies on an efficient connected components algorithm based on a novel approach to parallel merging. The algorithms have been coded in Split-C and run on a variety of platforms, including the Thinking Machines CM-5, IBM SP-1 and SP-2, Cray Research T3D, Meiko Scientific CS-2, Intel Paragon, and workstation clusters. Our experimental results are consistent with the theoretical analysis (and provide the best known execution times for segmentation, even when compared with machine-specific implementations). Our test data include difficult images from the Landsat Thematic Mapper (TM) satellite data.
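The segmentation stage relies on connected-components labelling. A sequential union-find version over a binary image captures what the parallel-merging approach computes (the pixel values are illustrative; the paper's tiled parallel merging is not reproduced):

```python
# Count 4-connected components of a binary image with union-find.

def count_components(img):
    h, w = len(img), len(img[0])
    parent = {}
    def find(p):
        while parent[p] != p:
            parent[p] = parent[parent[p]]  # path halving
            p = parent[p]
        return p
    for y in range(h):
        for x in range(w):
            if img[y][x]:
                parent[(y, x)] = (y, x)
                # union with the already-visited neighbours above and to the left
                for ny, nx in ((y - 1, x), (y, x - 1)):
                    if (ny, nx) in parent and img[ny][nx]:
                        parent[find((ny, nx))] = find((y, x))
    return len({find(p) for p in parent})

img = [[1, 1, 0, 0],
       [0, 1, 0, 1],
       [0, 0, 0, 1]]
print(count_components(img))  # 2 four-connected components
```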
19.
In this paper, new measures, called clustering performance measures (CPMs), for assessing the reliability of a clustering algorithm are proposed. These CPMs are defined using a validation measure, which determines how well the algorithm works with a given set of parameter values, and a repeatability measure, which is used for studying the stability of the clustering solutions and has the ability to estimate the correct number of clusters in a dataset. The proposed CPMs can be used to evaluate clustering algorithms that have a structure bias towards certain types of data distribution as well as those that have no structure bias. Additionally, we propose a novel cluster validity index, the V_I index, which is able to handle non-spherical clusters. Five clustering algorithms are evaluated on different types of real-world and synthetic data. The first dataset type is a communications signal dataset representing one modulation scheme under a variety of noise conditions, the second comprises two breast cancer datasets, and the third consists of synthetic datasets with arbitrarily shaped clusters. Additionally, comparisons with other methods for estimating the number of clusters indicate the applicability and reliability of the proposed V_I index and repeatability measure for correct estimation of the number of clusters.
Sameh A. Salem graduated with a BSc degree in Communications and Electronics Engineering and an MSc in Communications and Electronics Engineering, both from Helwan University, Cairo, Egypt, in May 1998 and October 2003, respectively. He is currently pursuing a PhD degree in the Signal Processing and Communications Group, Department of Electrical Engineering and Electronics, The University of Liverpool, UK. His research interests include clustering algorithms, machine learning, and parallel computing. Asoke K. Nandi received his PhD degree from the University of Cambridge (Trinity College), Cambridge, UK, in 1979. He held several research positions at the Rutherford Appleton Laboratory (UK), the European Organisation for Nuclear Research (Switzerland), the Department of Physics, Queen Mary College (London, UK) and the Department of Nuclear Physics (Oxford, UK). In 1987, he joined Imperial College, London, UK, as the Solartron Lecturer in the Signal Processing Section of the Electrical Engineering Department. In 1991, he joined the Signal Processing Division of the Electronic and Electrical Engineering Department in the University of Strathclyde, Glasgow, UK, as a Senior Lecturer; subsequently, he was appointed a Reader in 1995 and a Professor in 1998. In March 1999, he moved to the University of Liverpool, Liverpool, UK to take up his appointment to the David Jardine Chair of Signal Processing in the Department of Electrical Engineering and Electronics. In 1983, he was a member of the UA1 team at CERN that discovered the three fundamental particles known as W+, W− and Z0, providing the evidence for the unification of the electromagnetic and weak forces, which was recognised by the Nobel Committee for Physics in 1984. Currently, he is the Head of the Signal Processing and Communications Research Group with interests in the areas of non-Gaussian signal processing, communications, and machine learning research.
With his group he has been carrying out research in machine condition monitoring, signal modelling, system identification, communication signal processing, biomedical signals, ultrasonics, blind source separation, and blind deconvolution. He has authored or co-authored over 350 technical publications, including two books, "Automatic Modulation Recognition of Communications Signals" (Kluwer Academic, Boston, MA, 1996) and "Blind Estimation Using Higher-Order Statistics" (Kluwer Academic, Boston, MA, 1999), and over 140 journal papers. Professor Nandi was awarded the Mountbatten Premium (Division Award of the Electronics and Communications Division) of the Institution of Electrical Engineers of the UK in 1998 and the Water Arbitration Prize of the Institution of Mechanical Engineers of the UK in 1999. He is a Fellow of the Cambridge Philosophical Society, the Institution of Engineering and Technology, the Institute of Mathematics and its Applications, the Institute of Physics, the Royal Society for Arts, the Institution of Mechanical Engineers, and the British Computer Society.
20.
This paper presents a new unified subdivision scheme that is defined over a k-simplicial complex in n-D space with k ≤ 3. We first present a series of definitions to facilitate topological inquiries during the subdivision process. The scheme is derived from the double (k+1)-directional box splines over k-simplicial domains, and thus guarantees a certain level of smoothness in the limit on a regular mesh. The subdivision rules are modified by spatial averaging to guarantee C¹ smoothness near extraordinary cases. Within a single framework, we combine subdivision rules that can produce 1-, 2-, and 3-manifolds in arbitrary n-D space. Possible solutions for non-manifold regions between manifolds of different dimensions are suggested in the form of selective subdivision rules chosen according to user preference. We briefly describe the subdivision matrix analysis to ensure reasonable smoothness across extraordinary topologies, and empirical results support our assumption. In addition, through modifications, we show that the scheme can easily represent objects with singularities, such as cusps, creases, or corners. We further develop local adaptive refinement rules that can achieve level-of-detail control for hierarchical modeling. Our implementation is based on the topological properties of a simplicial domain and is therefore flexible and extendable. We also develop a solid modeling system founded on our subdivision schemes to show potential benefits of our work in industrial design, geometric processing, and other applications.
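The 1-manifold case of subdivision refinement can be illustrated with Chaikin's corner-cutting rule on a closed polyline (not the paper's box-spline-derived rules): each edge contributes two new points at its 1/4 and 3/4 positions, and repeated refinement converges to a smooth limit curve.

```python
# One step of Chaikin corner-cutting subdivision on a closed polyline.

def chaikin(poly):
    out = []
    n = len(poly)
    for i in range(n):
        (x0, y0), (x1, y1) = poly[i], poly[(i + 1) % n]
        out.append((0.75 * x0 + 0.25 * x1, 0.75 * y0 + 0.25 * y1))  # 1/4 point
        out.append((0.25 * x0 + 0.75 * x1, 0.25 * y0 + 0.75 * y1))  # 3/4 point
    return out

square = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
refined = chaikin(chaikin(square))
print(len(refined))  # 16: each subdivision step doubles the vertex count
```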