20 similar documents found (search time: 0 ms)
1.
2.
In this paper, two perceptually motivated morphological strategies (PMMS) are proposed to enhance the retrieval performance of common shape matching methods. First, two human perceptual habits closely related to shape retrieval are identified. Second, these habits are properly modeled by morphological operations. Finally, the proposed PMMS is applied to improve the retrieval performance of a popular shape matching method, Inner-Distance Shape Contexts (IDSC), and the Locally Constrained Diffusion Process (LCDP) method is then exploited to further enhance retrieval. This combination achieves a retrieval rate of 98.56% on the MPEG-7 dataset. We also conduct experiments on the Swedish Leaf, ETH-80, and Natural Silhouette datasets. The experimental results obtained from all four datasets clearly demonstrate the effectiveness of the proposed method.
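The abstract above models perceptual habits with morphological operations. A minimal sketch of the binary morphological building blocks (dilation, erosion, closing) such strategies rest on is shown below; the 3×3 structuring element, the zero padding, and the toy silhouette are illustrative assumptions, not details taken from the paper.

```python
# Binary morphology on a 2D grid of 0/1 pixels, with a 3x3 structuring
# element and out-of-bounds pixels treated as background (0).

NEIGHBORHOOD = [(dy, dx) for dy in (-1, 0, 1) for dx in (-1, 0, 1)]

def _at(img, y, x):
    """Pixel value with out-of-bounds treated as background (0)."""
    if 0 <= y < len(img) and 0 <= x < len(img[0]):
        return img[y][x]
    return 0

def dilate(img):
    """Set a pixel if any pixel under the 3x3 element is set."""
    return [[int(any(_at(img, y + dy, x + dx) for dy, dx in NEIGHBORHOOD))
             for x in range(len(img[0]))] for y in range(len(img))]

def erode(img):
    """Keep a pixel only if every pixel under the 3x3 element is set."""
    return [[int(all(_at(img, y + dy, x + dx) for dy, dx in NEIGHBORHOOD))
             for x in range(len(img[0]))] for y in range(len(img))]

def closing(img):
    """Dilation followed by erosion: fills small gaps and holes."""
    return erode(dilate(img))
```

Closing a ring-shaped silhouette with a one-pixel hole, for example, fills the hole while leaving the outer boundary unchanged.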
3.
The properties of the human visual system are taken into account, along with the geometric aspects of an object, in a new surface remeshing algorithm and a new mesh simplification algorithm. Both algorithms begin with a preprocessing step that computes an importance map indicating the visual masking potential of the visual patterns on the surface; this map then guides the remeshing or mesh simplification step. Two different methods are proposed for computing the importance map. The first is based on the Sarnoff visual discrimination metric, and the second is inspired by the visual masking tool available in the current JPEG2000 standard. Given an importance map, the surface remeshing algorithm automatically distributes fewer samples to surface regions with strong visual masking properties due to surface texturing, lighting variations, bump mapping, surface reflectance, and inter-reflections. Similarly, the mesh simplification algorithm simplifies more aggressively where the light field of an object can hide more geometric artifacts.
4.
Jörg Haber, Karol Myszkowski, Hitoshi Yamauchi & Hans-Peter Seidel, Computer Graphics Forum, 2001, 20(3):142-152
One of the basic difficulties with interactive walkthroughs is the high-quality rendering of object surfaces with non-diffuse light scattering characteristics. Since full ray tracing at interactive rates is usually impossible, we render a precomputed global illumination solution using graphics hardware and use the remaining computational power to correct the appearance of non-diffuse objects on the fly. The question arises of how to obtain the best image quality, as perceived by a human observer, within a limited amount of time for each frame. We address this problem by enforcing corrective computation for those non-diffuse objects that are selected using a computational model of visual attention. We consider both saliency- and task-driven selection of those objects and benefit from the fact that shading artifacts of "unattended" objects are likely to remain unnoticed. We use a hierarchical image-space sampling scheme to control ray tracing and splat the generated point samples. The resulting image converges progressively to a ray-traced solution if the viewing parameters remain unchanged. Moreover, we use a sample cache to enhance visual appearance if the time budget for correction has been too low for some frame. We check the validity of the cached samples using a novel criterion suited for non-diffuse surfaces and reproject valid samples into the current view.
5.
Rectilinearity measurements for polygons
Zunic, J., Rosin, P.L., IEEE Transactions on Pattern Analysis and Machine Intelligence, 2003, 25(9):1193-1200
The paper introduces a shape measure intended to describe the extent to which a closed polygon is rectilinear. Apart from somewhat obvious measures of rectilinearity (e.g., the sum of the differences of each corner's angle from the nearest multiple of 90°), there has been little work on deriving a measure that is straightforward to compute, is invariant under scale, rotation, and translation, and corresponds with the intuitive notion of rectilinear shapes. There are applications in a number of different areas of computer vision and photogrammetry. Rectilinear structures often correspond to human-made objects and are therefore justified as attentional cues for further processing. For instance, in aerial image processing and reconstruction, where building footprints are often rectilinear on the local ground plane, building structures, once recognized as rectilinear, can be matched to corresponding shapes in other views for stereo reconstruction. Perceptual grouping algorithms may seek to complete shapes based on the assumption that the object in question is rectilinear. Using the proposed measure, such systems can verify this assumption.
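The "obvious" measure the abstract mentions, summing each corner's deviation from the nearest multiple of 90°, can be sketched directly from turn angles; the function below is an illustration of that baseline, not the paper's proposed measure.

```python
import math

def angle_deviation_from_rectilinear(poly):
    """Sum over all corners of the deviation of the turn angle from the
    nearest multiple of 90 degrees: 0 for a perfectly rectilinear polygon.
    `poly` is a list of (x, y) vertices in order around the boundary."""
    n = len(poly)
    total = 0.0
    for i in range(n):
        ax, ay = poly[i - 1]
        bx, by = poly[i]
        cx, cy = poly[(i + 1) % n]
        # direction of the incoming and outgoing edges, in degrees
        a_in = math.degrees(math.atan2(by - ay, bx - ax))
        a_out = math.degrees(math.atan2(cy - by, cx - bx))
        turn = (a_out - a_in) % 360.0
        # distance to the nearest multiple of 90 degrees
        total += min(turn % 90.0, 90.0 - turn % 90.0)
    return total
```

For an axis-aligned square the measure is 0; for a right triangle with two 45° corners it is 90, matching the intuition that the triangle is far from rectilinear. Note that, unlike the paper's measure, this one is not normalized to a fixed range.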
6.
Palma-Amestoy, R., Provenzi, E., Bertalmio, M., Caselles, V., IEEE Transactions on Pattern Analysis and Machine Intelligence, 2009, 31(3):458-474
Basic phenomenology of human color vision has been widely taken as an inspiration to devise explicit color correction algorithms. The behavior of these models in terms of significant image features (such as contrast and dispersion) can be difficult to characterize. To cope with this, we propose a variational formulation of color contrast enhancement inspired by the basic phenomenology of color perception. In particular, we devise a set of basic requirements that an energy must fulfill to be considered 'perceptually inspired', showing that there is an explicit class of functionals satisfying all of them. We single out three explicit functionals that we consider of basic interest, showing similarities and differences with existing models. The minima of these functionals are computed using a gradient descent approach. We also present a general methodology to reduce the computational cost of the algorithms under analysis from O(N²) to O(N log N), where N is the number of pixels in the input image.
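The gradient-descent minimization mentioned in the abstract can be illustrated on a toy quadratic energy with the same three ingredients such functionals typically balance: attachment to middle gray, attachment to the original values, and a term rewarding dispersion around the mean. The energy and the parameters alpha, beta, and gamma below are illustrative assumptions, not the paper's actual functionals.

```python
def enhance(u0, alpha=1.0, beta=1.0, gamma=0.8, step=0.1, iters=500):
    """Gradient descent on a toy contrast-enhancement energy.
    u0: flat list of intensities in [0, 1]. Returns the enhanced list.
    Gradient per pixel: alpha*(u-1/2) pulls toward middle gray,
    beta*(u-u0) pulls toward the original, and -2*gamma*(u-mean)
    pushes values away from the current mean (more contrast)."""
    n = len(u0)
    u = list(u0)
    for _ in range(iters):
        m = sum(u) / n
        u = [min(1.0, max(0.0,
                 ui - step * (alpha * (ui - 0.5)
                              + beta * (ui - oi)
                              - 2.0 * gamma * (ui - m))))
             for ui, oi in zip(u, u0)]
    return u
```

On the two-pixel input [0.4, 0.6] with the default parameters this converges to [0.25, 0.75]: the spread around the mean grows while each value stays tethered to its original.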
7.
We introduce a novel concept called perceptually meaningful image editing and present techniques for manipulating the apparent depth of objects and creating the illusion of motion in 2D images. Our techniques combine principles of human visual perception with approaches developed by traditional artists. For our depth manipulation technique, the user loads an image, selects an object and specifies whether the object should appear closer or further away. The system automatically determines luminance or color temperature target values for the object and/or background that achieve the desired depth change. Our approach for creating the illusion of motion exploits the differences between our peripheral vision and our foveal vision by introducing spatial imprecision to the image.
8.
9.
A key problem in computational geometry is the identification of subsets of a point set having particular properties. We study this problem for the properties of convexity and emptiness. We show that finding empty triangles is related to the problem of determining pairs of vertices that see each other in a star-shaped polygon. A linear-time algorithm for this problem, which is of independent interest, yields an optimal algorithm for finding all empty triangles. This result is then extended to an algorithm for finding empty convex r-gons (r > 3) and for determining a largest empty convex subset. Finally, extensions to higher dimensions are mentioned.
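For contrast with the optimal visibility-based algorithm described above, a brute-force enumeration of empty triangles is easy to state; the sketch below is only a reference implementation, roughly O(n⁴), and not the paper's method.

```python
from itertools import combinations

def sign(o, a, b):
    """Twice the signed area of triangle (o, a, b)."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def in_triangle(p, a, b, c):
    """True if p lies inside or on the boundary of triangle (a, b, c)."""
    d1, d2, d3 = sign(a, b, p), sign(b, c, p), sign(c, a, p)
    has_neg = min(d1, d2, d3) < 0
    has_pos = max(d1, d2, d3) > 0
    return not (has_neg and has_pos)

def empty_triangles(pts):
    """All triangles on pts containing no other input point (boundary
    counts as inside); degenerate collinear triples are skipped."""
    out = []
    for a, b, c in combinations(pts, 3):
        if sign(a, b, c) == 0:
            continue
        if not any(in_triangle(p, a, b, c)
                   for p in pts if p not in (a, b, c)):
            out.append((a, b, c))
    return out
```

On the four corners of a unit square all four corner triangles are empty; adding the center point removes those (the center lies on each diagonal) and leaves the four triangles formed by the center and two adjacent corners.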
10.
We present an algorithm for finding optimum partitions of simple monotone rectilinear polygons into star-shaped polygons. The algorithm may introduce Steiner points and its time complexity is O(n), where n is the number of vertices in the polygon. We then use this algorithm to obtain an O(n log n) approximation algorithm for partitioning simple rectilinear polygons into star-shaped polygons, with the size of the partition being at most six times the optimum.
11.
12.
A key problem in computational geometry is the identification of subsets of a point set having particular properties. We study this problem for the properties of convexity and emptiness. We show that finding empty triangles is related to the problem of determining pairs of vertices that see each other in a star-shaped polygon. A linear-time algorithm for this problem, which is of independent interest, yields an optimal algorithm for finding all empty triangles. This result is then extended to an algorithm for finding empty convex r-gons (r > 3) and for determining a largest empty convex subset. Finally, extensions to higher dimensions are mentioned. The first author is pleased to acknowledge support by the National Science Foundation under Grant CCR-8700917. The research of the second author was supported by Amoco Foundation Faculty Development Grant CS 1-6-44862 and by the National Science Foundation under Grant CCR-8714565.
13.
We present an algorithm for finding optimum partitions of simple monotone rectilinear polygons into star-shaped polygons. The algorithm may introduce Steiner points and its time complexity is O(n), where n is the number of vertices in the polygon. We then use this algorithm to obtain an O(n log n) approximation algorithm for partitioning simple rectilinear polygons into star-shaped polygons, with the size of the partition being at most six times the optimum.
14.
In this work we present a new algorithm for testing the inclusion of points in polygons whose edges are curve segments. It is valid for closed planar polygons whose edges can be straight line segments, conic arcs, or cubic Bézier curves. It is also valid for manifold and non-manifold closed planar polygons, with and without holes. Our algorithm is robust and simple: first, because it avoids solving systems of equations in the inclusion test, and second, because it handles special cases effectively and homogeneously.
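The classic special case of this inclusion test, a polygon with straight-line edges only, is the even-odd ray-casting test sketched below; the paper's contribution is extending the idea to conic and Bézier edges, which this sketch does not cover.

```python
def point_in_polygon(p, poly):
    """Even-odd (ray-casting) inclusion test for a closed polygon with
    straight-line edges. Counts crossings of the horizontal ray going
    right from p; an odd count means p is inside."""
    px, py = p
    inside = False
    n = len(poly)
    for i in range(n):
        (x1, y1), (x2, y2) = poly[i], poly[(i + 1) % n]
        # the edge crosses the ray's horizontal line iff its endpoints
        # straddle py (the half-open test handles vertices robustly)
        if (y1 > py) != (y2 > py):
            x_cross = x1 + (py - y1) * (x2 - x1) / (y2 - y1)
            if px < x_cross:
                inside = not inside
    return inside
```

The half-open comparison `(y1 > py) != (y2 > py)` is one of the standard ways to dispose of the special cases (ray through a vertex, horizontal edge) homogeneously, echoing the robustness concern in the abstract.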
15.
Perceptually optimized 3D graphics
The author uses models of visual perception to remove non-perceptible components of a 3D computer graphics scene and optimize the system's performance. He considers how much detail can be removed from the scene without the user noticing, and how much added benefit these optimizations actually bring.
16.
This study presents a real-time texture transfer method for artistic style transfer on video streams. We propose a parallel framework using a T-shaped kernel to enhance computational performance. For the accelerated motion estimation required to maintain temporal coherence, we present a method using a downscaled motion field that achieves high real-time performance for texture transfer on video streams. In addition, to enhance artistic quality, we calculate the level of abstraction using visual saliency and integrate it with the texture transfer algorithm. Thus, our algorithm can stylize video with perceptual enhancements.
17.
Derick Wood, Information Processing Letters, 1984, 19(5):229-236
The union of a set of p, not necessarily disjoint, rectilinear polygons in the plane determines a set of disjoint rectilinear polygons. We present an O(n log n + e) time and O(n) space algorithm to compute the edges of the disjoint polygons, that is, the contour, where n is the total number of vertices in the original polygons and e the total number in the resulting set. This time- and space-optimal algorithm uses the scan-line paradigm, as in two previous approaches to this problem for rectangles, but requires a simpler data structure. Moreover, if the given rectilinear polygons are rectilinearly convex, the space requirement is reduced to O(p).
18.
Polygons are everywhere, but one place the author didn't expect to see them is in the Fourier transform; he found them there as well. The Fourier transform is an indispensable tool in signal processing. In computer graphics, it helps us understand and cure problems as diverse as jaggies on the edges of polygons, blocky-looking textures, and animated objects that appear to jump erratically as they move across the screen. His friend and colleague Alvy Ray Smith recently wrote a memo demonstrating a surprising interpretation of the Fourier transform: he showed how, in some circumstances, the Fourier transform looks like nothing more than operations on regular polygons. The article is about that fascinating insight. He starts off by using complex numbers to do geometry and then moves on to the Fourier series, building up to a discussion of the new interpretation.
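The polygon interpretation can be made concrete with complex numbers: the N-th roots of unity are the vertices of a regular N-gon, and each DFT coefficient walks that polygon with a different step size. The sketch below illustrates that connection in the spirit of the article; the plain O(N²) transform and the examples are this editor's illustration, not the article's own code.

```python
import cmath

def roots_of_unity(n):
    """Vertices of a regular n-gon on the unit circle: exp(2*pi*i*k/n)."""
    return [cmath.exp(2j * cmath.pi * k / n) for k in range(n)]

def dft(x):
    """Plain O(N^2) discrete Fourier transform, written so the polygon
    structure is visible: coefficient k sums the signal while stepping
    around the regular N-gon with stride k."""
    n = len(x)
    w = roots_of_unity(n)
    return [sum(x[t] * w[(-k * t) % n] for t in range(n)) for k in range(n)]
```

Walking once around any regular polygon returns to the start, so the vertices sum to zero; this is exactly why the DFT of a constant signal is zero everywhere except at the k = 0 coefficient.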
19.
A new convexity measure for polygons
Convexity estimators are commonly used in the analysis of shape. In this paper, we define and evaluate a new convexity measure for planar regions bounded by polygons. The new convexity measure can be understood as a "boundary-based" measure and, in accordance with this, is more sensitive to boundary defects of the measured shape than so-called "area-based" convexity measures. When compared with the convexity measure defined as the ratio between the Euclidean perimeter of the convex hull of the measured shape and the Euclidean perimeter of the measured shape itself, the new convexity measure also shows some advantages, particularly for shapes with holes. The new convexity measure has the following desirable properties: 1) the estimated convexity is always a number from (0, 1]; 2) the estimated convexity is 1 if and only if the measured shape is convex; 3) there are shapes whose estimated convexity is arbitrarily close to 0; 4) the new convexity measure is invariant under similarity transformations; and 5) there is a simple and fast procedure for computing it.
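The comparison measure named in the abstract, convex-hull perimeter divided by shape perimeter, is simple to compute; the sketch below implements that baseline (not the paper's new measure) using Andrew's monotone-chain hull.

```python
import math

def perimeter(poly):
    """Total edge length of a closed polygon given as ordered vertices."""
    return sum(math.dist(poly[i], poly[(i + 1) % len(poly)])
               for i in range(len(poly)))

def convex_hull(points):
    """Andrew's monotone-chain convex hull, counter-clockwise order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def half(seq):
        h = []
        for p in seq:
            # pop while the last two points and p make a non-left turn
            while len(h) >= 2 and ((h[-1][0] - h[-2][0]) * (p[1] - h[-2][1])
                                   - (h[-1][1] - h[-2][1]) * (p[0] - h[-2][0])) <= 0:
                h.pop()
            h.append(p)
        return h
    lower, upper = half(pts), half(reversed(pts))
    return lower[:-1] + upper[:-1]

def perimeter_convexity(poly):
    """Hull perimeter over shape perimeter: 1 exactly for convex shapes,
    below 1 as boundary concavities lengthen the shape's perimeter."""
    return perimeter(convex_hull(poly)) / perimeter(poly)
```

A square scores exactly 1; cutting a notch into its top edge lengthens the boundary without changing the hull, so the score drops below 1. The abstract's point is that this baseline behaves poorly for shapes with holes, which the new boundary-based measure handles better.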
20.
Subir Kumar Ghosh, Anil Maheshwari, Sudebkumar Prasant Pal & C. E. Veni Madhavan, The Visual Computer, 1994, 10(8):443-451
A polygon P is said to be a palm polygon if there exists a point x ∈ P such that the Euclidean shortest path from x to any point y ∈ P makes only left turns or only right turns. The set of all such points x is called the palm kernel. In this paper we propose an O(E) time algorithm for recognizing a palm polygon P, where E is the size of the visibility graph of P. The algorithm recognizes the given polygon P as a palm polygon by computing the palm kernel of P. If the palm kernel is not empty, P is a palm polygon. The extended abstract of this paper was reported at the Second Canadian Conference on Computational Geometry, pp. 246-251, 1990.