Similar Documents
20 similar documents found.
1.
In this paper, a novel graph-based approach to the shape decomposition problem is presented. The shape is appropriately transformed into a visibility graph enriched with local neighborhood information. A two-step diffusion process is then applied to the visibility graph that efficiently enhances the information provided, thus leading to a more robust and meaningful graph construction. Inspired by the notion of a clique as a strict cluster definition, the dominant sets algorithm is invoked, slightly modified to comport with the specific problem of defining shape parts. The cluster cohesiveness and a node participation vector are two important outputs of the proposed graph partitioning method. In contrast to most existing techniques, the final number of clusters is determined automatically, by estimating the cluster cohesiveness against a random network generation process. Experimental results on several shape databases show the effectiveness of our framework for graph-based shape decomposition.
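As a hedged illustration of the clustering machinery mentioned above, the sketch below extracts a dominant set from a toy affinity matrix via replicator dynamics; the visibility-graph construction, the two-step diffusion, and the authors' modifications for shape parts are not reproduced, and the tolerance and iteration settings are assumptions.

```python
import numpy as np

def dominant_set(A, tol=1e-6, max_iter=1000):
    """Extract one dominant set from a symmetric affinity matrix A
    (zero diagonal) via discrete replicator dynamics."""
    n = A.shape[0]
    x = np.full(n, 1.0 / n)          # start from the barycenter of the simplex
    for _ in range(max_iter):
        Ax = A @ x
        x_new = x * Ax / (x @ Ax)    # replicator update keeps x on the simplex
        if np.linalg.norm(x_new - x, 1) < tol:
            x = x_new
            break
        x = x_new
    return x                          # node-participation vector; its support is one cluster

# Toy affinity matrix: three mutually similar nodes plus one weakly attached node.
A = np.array([[0.0, 1.0, 1.0, 0.1],
              [1.0, 0.0, 1.0, 0.1],
              [1.0, 1.0, 0.0, 0.1],
              [0.1, 0.1, 0.1, 0.0]])
print(dominant_set(A).round(3))
```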

2.
The authors introduce the notion of compatible star decompositions of simple polygons. In general, given two polygons with a correspondence between their vertices, two polygonal decompositions of the two polygons are said to be compatible if there exists a one-to-one mapping between them such that the corresponding pieces are defined by corresponding vertices. For compatible star decompositions, they also require correspondence between star points of the star pieces. Compatible star decompositions have applications in computer animation and shape representation and analysis. They present two algorithms for constructing compatible star decompositions of two simple polygons. The first algorithm is optimal in the number of pieces in the decomposition, provided that such a decomposition exists without adding Steiner vertices. The second algorithm constructs compatible star decompositions with Steiner vertices, which are not minimal in the number of pieces but are asymptotically worst-case optimal in this number and in the number of added Steiner vertices. They prove that some pairs of polygons require Ω(n²) pieces, and that the decompositions computed by the second algorithm possess no more than O(n²) pieces. In addition to the contributions regarding compatible star decompositions, the paper also corrects an error in the only previously published polynomial algorithm for constructing a minimal star decomposition of a simple polygon, an error which might lead to a nonminimal decomposition.
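As a small, hedged illustration of the "star" property these decompositions rely on (not the authors' algorithm), the sketch below tests whether a candidate star point lies in the kernel of a simple polygon, assuming counter-clockwise vertex order.

```python
def cross(o, a, b):
    """2D cross product of vectors (a - o) and (b - o)."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def is_star_point(polygon, p, eps=1e-12):
    """True if p lies in the kernel of a simple polygon given in CCW order,
    i.e. p is on or to the left of the supporting line of every edge."""
    n = len(polygon)
    return all(cross(polygon[i], polygon[(i + 1) % n], p) >= -eps for i in range(n))

# A convex quadrilateral: every interior point is a star point, outside points are not.
square = [(0, 0), (2, 0), (2, 2), (0, 2)]
print(is_star_point(square, (1, 1)), is_star_point(square, (3, 1)))
```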

3.
Smooth reverse subdivision
In this paper we present a new multiresolution framework that takes into consideration reducing the coarse points' energy during decomposition. We start from initial biorthogonal filters to include energy minimization in multiresolution. Decomposition and reconstruction are the main operations for any multiresolution representation. We formulate decomposition as smooth reverse subdivision, based on a least squares problem. Both approximation of overall shape and energy are taken into account in the least squares formulation through different weights. Using this method, significant smoothness in the decomposition of curves and tensor product surfaces can be achieved, while their overall shape is preserved. Having smooth coarse points yields details with maximum characteristics. Our method works well in synthesis applications in which re-using high-energy details is important. We use our method for finding the smooth reverse of three common subdivision schemes. We also provide examples of our method in curve synthesis and terrain synthesis applications.
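A minimal sketch of the least-squares view of reverse subdivision, assuming Chaikin's scheme as the subdivision operator and omitting the paper's energy/smoothness weights: the coarse points c minimize ||S c - f||^2 for a fine curve f and subdivision matrix S.

```python
import numpy as np

def chaikin_matrix(m):
    """Subdivision matrix of Chaikin's corner-cutting scheme for a closed
    curve with m coarse points (produces 2*m fine points)."""
    S = np.zeros((2 * m, m))
    for i in range(m):
        S[2 * i, i], S[2 * i, (i + 1) % m] = 0.75, 0.25
        S[2 * i + 1, i], S[2 * i + 1, (i + 1) % m] = 0.25, 0.75
    return S

# Fine closed curve with 2*m points (noisy samples of a circle).
m = 8
t = np.linspace(0, 2 * np.pi, 2 * m, endpoint=False)
fine = np.c_[np.cos(t), np.sin(t)] + 0.02 * np.random.randn(2 * m, 2)

# Reverse subdivision as a least-squares fit of the coarse polygon.
S = chaikin_matrix(m)
coarse, *_ = np.linalg.lstsq(S, fine, rcond=None)
details = fine - S @ coarse            # residual captures the detail lost in decomposition
print(coarse.shape, np.linalg.norm(details))
```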

4.
By combining a hypersphere manifold constraint on the face space, gradient-based heuristic global optimization, a spherical-harmonic description of illumination, and direct hidden-point removal based on the visible points of the convex hull, this paper proposes an image-matching method for 3D morphable models. First, the camera parameters and shape parameters are solved by a global optimization algorithm under the shape hypersphere manifold constraint. Then, the object-to-image point correspondences are determined using these parameters together with direct hidden-point removal on the convex hull point set. Finally, given these correspondences, the illumination parameters and reflectance parameters are solved by a global optimization algorithm under the reflectance hypersphere manifold constraint. Quantitative comparison experiments show that the method recovers all parameters of a 3D morphable model (3DMM) from a single image, without resorting to region-based fitting, manually estimated parameter values, hierarchical matching strategies, or complex feature combinations.
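The illumination description above is based on spherical harmonics; the sketch below shows a common second-order (9-coefficient) real SH evaluation for Lambertian shading, as a hedged illustration only. The lighting coefficients are hypothetical, and the 3DMM fitting, manifold constraints, and hidden-point removal are not shown.

```python
import numpy as np

def sh_basis9(n):
    """Real spherical-harmonic basis (l <= 2) evaluated at a unit normal n."""
    x, y, z = n
    return np.array([
        0.282095,
        0.488603 * y, 0.488603 * z, 0.488603 * x,
        1.092548 * x * y, 1.092548 * y * z,
        0.315392 * (3.0 * z * z - 1.0),
        1.092548 * x * z,
        0.546274 * (x * x - y * y),
    ])

def shade(reflectance, normal, light_coeffs):
    """Lambertian shading: reflectance times the SH-projected lighting
    (coefficients are assumed to already include any convolution factors)."""
    return reflectance * float(light_coeffs @ sh_basis9(normal))

# Hypothetical 9 lighting coefficients (e.g. as estimated during model fitting).
L = np.array([0.8, 0.0, 0.3, 0.1, 0.0, 0.0, 0.05, 0.0, 0.0])
print(shade(0.6, np.array([0.0, 0.0, 1.0]), L))
```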

5.
An Information Theory Framework for the Analysis of Scene Complexity
In this paper we present a new framework for the analysis of scene visibility and radiosity complexity. We introduce a number of complexity measures from information theory quantifying how difficult it is to compute the visibility and radiosity in a scene accurately. We define the continuous mutual information as a complexity measure of a scene, independent of any discretisation, and the discrete mutual information as the complexity of a discretised scene. Mutual information can be understood as the degree of correlation or dependence between all the points or patches of a scene. Thus, low complexity corresponds to low correlation and vice versa. Experiments illustrating that the best mesh of a given scene among a number of alternatives corresponds to the one with the highest discrete mutual information indicate the feasibility of the approach. Unlike continuous mutual information, which is very cheap to compute, the computation of discrete mutual information can be quite demanding. We will develop cheap complexity measure estimates and derive practical algorithms from this framework in future work.
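A small sketch of a discrete mutual information computation over scene patches, assuming a row-stochastic patch-to-patch matrix F and a patch importance distribution p; this is a generic MI formula, not necessarily the paper's exact estimator.

```python
import numpy as np

def discrete_mutual_information(p, F, eps=1e-12):
    """I(X;Y) = sum_ij p_i F_ij log(F_ij / q_j), where the joint is
    P(i, j) = p_i * F_ij and the marginal is q_j = sum_i p_i F_ij."""
    P = p[:, None] * F                 # joint distribution over patch pairs
    q = P.sum(axis=0)                  # marginal over the "seen" patch
    ratio = F / (q[None, :] + eps)
    return float((P * np.log(ratio + eps)).sum())

# Toy 3-patch scene: row i of F is the probability that patch i "sees" patch j.
F = np.array([[0.0, 0.7, 0.3],
              [0.5, 0.0, 0.5],
              [0.4, 0.6, 0.0]])
p = np.array([0.4, 0.3, 0.3])          # importance of each patch
print(discrete_mutual_information(p, F))
```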

6.
We present a global method for consistently orienting a defective raw point set with noise, non-uniformities and thin sharp features. Our method seamlessly combines two simple but effective techniques, constrained Laplacian smoothing and visibility voting, to tackle this challenge. First, we apply a Laplacian contraction to the given point cloud, which slightly shrinks the shape. Each shrunk point corresponds to an input point and shares a visibility confidence assigned by voting from multiple viewpoints. The confidence is increased (resp. decreased) if the input point (resp. its corresponding shrunk point) is visible. Then, the initial normals estimated by principal component analysis are flipped according to the contraction vectors from the shrunk points to the corresponding input points and the visibility confidence. Finally, we apply Laplacian smoothing twice to correct the orientation of points with zero or low confidence. Our method is conceptually simple and easy to implement, without resorting to any complicated data structures or advanced solvers. Numerous experiments demonstrate that our method can orient defective raw point clouds in a consistent manner. By taking advantage of our orientation information, classical implicit surface reconstruction algorithms can faithfully generate the surface.
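A hedged sketch of two ingredients of this pipeline, PCA normal estimation and sign flipping against contraction vectors, using k-nearest neighbours from SciPy; the Laplacian contraction, visibility voting, and confidence-weighted smoothing are omitted, and the flip convention is an assumption.

```python
import numpy as np
from scipy.spatial import cKDTree

def pca_normals(points, k=16):
    """Unoriented normals: the smallest-variance direction of each local
    neighbourhood's covariance, for an (n, 3) float array of points."""
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k)
    normals = np.empty_like(points)
    for i, nbrs in enumerate(idx):
        nbr_pts = points[nbrs] - points[nbrs].mean(axis=0)
        _, _, vt = np.linalg.svd(nbr_pts, full_matrices=False)
        normals[i] = vt[-1]            # direction of least variance
    return normals

def orient_by_contraction(points, shrunk, normals):
    """Flip each normal to agree with the vector from the shrunk (contracted)
    point to its input point, which roughly points outward."""
    v = points - shrunk
    flip = np.sign((normals * v).sum(axis=1))
    flip[flip == 0] = 1.0
    return normals * flip[:, None]
```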

7.
Several fast global illumination algorithms rely on the Virtual Point Lights framework. This framework separates illumination into two steps: first, propagate radiance in the scene and store it in virtual lights, then gather illumination from these virtual lights. To accelerate the second step, virtual lights and receiving points are grouped hierarchically, for example using Multi-Dimensional Lightcuts. Computing visibility between clusters of virtual lights and receiving points is a bottleneck. Separately, matrix completion algorithms fully reconstruct a low-rank matrix from an incomplete set of sampled elements. In this paper, we use adaptive matrix completion to approximate visibility information after an initial clustering step. We reconstruct visibility information using as few as 10% to 20% of the samples for most scenes, and combine it with shading information computed separately, in parallel on the GPU. Overall, our method computes global illumination 3 or more times faster than previous state-of-the-art methods.
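A generic low-rank completion sketch in the spirit of the visibility-matrix idea (iterative SVD truncation on the observed entries); it is not the paper's adaptive scheme, and the rank, iteration count, and sampling rate below are assumptions.

```python
import numpy as np

def complete_low_rank(M_obs, mask, rank=2, iters=100):
    """Estimate the missing entries of a partially observed matrix by
    alternating a rank-`rank` SVD truncation with re-imposing the observed
    samples (a simple hard-impute scheme)."""
    X = np.where(mask, M_obs, 0.0)
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        low_rank = (U[:, :rank] * s[:rank]) @ Vt[:rank]
        X = np.where(mask, M_obs, low_rank)   # keep observed entries fixed
    return X

# Toy rank-1 "visibility" matrix with 20% of its entries observed.
rng = np.random.default_rng(0)
truth = np.outer(rng.random(30), rng.random(40))
mask = rng.random(truth.shape) < 0.2
approx = complete_low_rank(truth, mask, rank=1)
print(np.abs(approx - truth).max())
```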

8.
Projectively invariant decomposition and recognition of planar shapes
An algorithm is presented for computing a decomposition of planar shapes into convex subparts represented by ellipses. The method is invariant to projective transformations of the shape, and thus the conic primitives can be used for matching and definition of invariants in the same way as points and lines. The method works for arbitrary planar shapes admitting at least four distinct tangents, and it is based on finding ellipses with four points of contact to the given shape. The cross ratio computed from the four points on the ellipse can then be used as a projectively invariant index. It is demonstrated that a given shape has a unique parameter-free decomposition into a finite set of ellipses with unit cross ratio. For a given shape, each pair of ellipses can be used to compute two independent projective invariants. The set of invariants computed for each ellipse pair can be used as indexes to a hash table from which model hypotheses can be generated. Examples of shape decomposition and recognition are given for synthetic shapes and shapes extracted from grey level images of real objects using edge detection.
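A small sketch of the projective index used here: the cross ratio of four points on a conic, computed from the pencil of rays through a fifth conic point (ratios of signed sines of the angles between the rays). It only illustrates the invariant, not the tangency-based decomposition.

```python
import numpy as np

def cross_ratio_on_conic(p1, p2, p3, p4, q):
    """Cross ratio of four conic points seen from a fifth conic point q,
    via signed sines of the angles between the rays q->p_i."""
    def s(a, b):   # signed sine of the angle between rays q->a and q->b
        u, v = np.asarray(a, float) - q, np.asarray(b, float) - q
        return (u[0] * v[1] - u[1] * v[0]) / (np.linalg.norm(u) * np.linalg.norm(v))
    return (s(p1, p3) * s(p2, p4)) / (s(p2, p3) * s(p1, p4))

# Five points on the unit circle (a conic); by Chasles' theorem the cross
# ratio does not depend on which fifth conic point q is chosen.
ang = np.deg2rad([0, 50, 130, 200])
pts = np.c_[np.cos(ang), np.sin(ang)]
for q_ang in (280, 320):
    q = np.array([np.cos(np.deg2rad(q_ang)), np.sin(np.deg2rad(q_ang))])
    print(cross_ratio_on_conic(*pts, q))
```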

9.
Effects of Errors in the Viewing Geometry on Shape Estimation
A sequence of images acquired by a moving sensor contains information about the three-dimensional motion of the sensor and the shape of the imaged scene. Interesting research during the past few years has attempted to characterize the errors that arise in computing 3D motion (egomotion estimation) as well as the errors that result in the estimation of the scene's structure (structure from motion). Previous research is characterized by the use of optic flow or feature correspondence in the analysis, as well as by the employment of particular algorithms and scene models in recovering expressions for the resulting errors. This paper presents a geometric framework that characterizes the relationship between 3D motion and shape in the presence of errors. We examine how the three-dimensional space recovered by a moving monocular observer, whose 3D motion is estimated with some error, is distorted. We characterize the space of distortions by its level sets; that is, we characterize the systematic distortion via a family of iso-distortion surfaces, which describe the locus over which the depths of points in the scene in view are distorted by the same multiplicative factor. The framework introduced in this way has a number of applications: since the visible surfaces have positive depth (the visibility constraint), by analyzing the geometry of the regions where the distortion factor is negative, that is, where the visibility constraint is violated, we make explicit the situations that are likely to give rise to ambiguities in motion estimation, independent of the algorithm used. We provide a uniqueness analysis for 3D motion analysis from normal flow. We study the constraints on egomotion, object motion, and depth for an independently moving object to be detectable by a moving observer, and we offer a quantitative account of the precision needed in an inertial sensor for accurate estimation of 3D motion.

10.
Segmentation and tracking of multiple humans in crowded situations is made difficult by inter-object occlusion. We propose a model-based approach to interpret the image observations by multiple, partially occluded human hypotheses in a Bayesian framework. We define a joint image likelihood for multiple humans based on the appearance of the humans, the visibility of body parts obtained by occlusion reasoning, and foreground/background separation. The optimal solution is obtained by using an efficient sampling method, data-driven Markov chain Monte Carlo (DDMCMC), which uses image observations for proposal probabilities. Knowledge of various aspects, including human shape, the camera model, and image cues, is integrated in one theoretically sound framework. We present experimental results and quantitative evaluation, demonstrating that the resulting approach is effective for very challenging data.
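A bare Metropolis-Hastings skeleton of the sampling idea, with the data-driven proposal and the joint multi-human likelihood replaced by hypothetical placeholders; it shows the acceptance rule for asymmetric proposals, not the paper's DDMCMC design.

```python
import math
import random

def metropolis_hastings(log_likelihood, propose, init, n_steps=1000):
    """Sample hypotheses from a posterior proportional to exp(log_likelihood);
    `propose` returns (new_state, log q(old|new) - log q(new|old)), so
    asymmetric, data-driven proposals fit this interface."""
    state, log_p = init, log_likelihood(init)
    for _ in range(n_steps):
        cand, log_q_ratio = propose(state)
        log_p_cand = log_likelihood(cand)
        if math.log(random.random()) < log_p_cand - log_p + log_q_ratio:
            state, log_p = cand, log_p_cand   # accept the proposed hypothesis
    return state

# Toy usage: infer a 1-D "position" under a Gaussian pseudo-likelihood.
ll = lambda x: -0.5 * (x - 3.0) ** 2
prop = lambda x: (x + random.gauss(0.0, 0.5), 0.0)   # symmetric proposal
print(metropolis_hastings(ll, prop, init=0.0))
```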

11.
We developed a new framework to generate hand and finger grasping motions. The proposed framework provides online adaptation to the position and orientation of objects and can generate grasping motions even when the object shape differs from that used during motion capture. This is achieved by using a mesh model, which we call primitive object grasping (POG), to represent the object grasping motion. The POG model uses a mesh deformation algorithm that keeps the original shape of the mesh while adapting to varying constraints. These characteristics are beneficial for finger grasping motion synthesis that satisfies constraints for mimicking the motion capture sequence and the grasping points reflecting the shape of the object. We verify the adaptability of the proposed motion synthesizer to the position/orientation and shape variations of different objects by using motion capture sequences for grasping primitive objects, namely a sphere, a cylinder, and a box. In addition, a different grasp strategy called a three-finger grasp is synthesized to validate the generality of the POG-based synthesis framework.

12.
This paper uses the AlexNet neural network to build a framework for highway visibility recognition. Road camera images are collected and annotated, the AlexNet model is trained to extract visibility features from the images, and a visibility-level recognition model is constructed; road camera feeds are then connected in real time to estimate visibility values. Images were collected from 42 highway surveillance cameras in Anhui Province, and more than 150,000 samples annotated with visibility values were extracted for evaluation. The results show an average recognition rate of 78.02%, with 14 stations exceeding 90% and 21 stations above 80%. The AlexNet-based road visibility estimation method meets the real-time and accuracy requirements of road visibility recognition, can serve as an auxiliary visibility monitoring method in areas where no visibility meter is installed, and is robust to variations in illumination and viewing distance.
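A hedged PyTorch/torchvision sketch of the kind of setup described above: fine-tuning AlexNet as a visibility-level classifier. The number of visibility classes, preprocessing, and training hyperparameters are assumptions, and the paper's data pipeline is not reproduced.

```python
import torch
import torch.nn as nn
from torchvision import models, transforms

NUM_VISIBILITY_CLASSES = 5          # assumed number of visibility levels

# Start from an ImageNet-pretrained AlexNet and replace the final layer.
model = models.alexnet(weights=models.AlexNet_Weights.DEFAULT)
model.classifier[6] = nn.Linear(4096, NUM_VISIBILITY_CLASSES)

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)

def train_step(images, labels):
    """One optimization step on a batch of preprocessed camera images."""
    model.train()
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```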

13.
We present an effective optimization framework to compute polycube mapping. Composed of a set of small cubes, a polycube well approximates the geometry of a free-form model yet possesses great regularity; therefore, it can serve as a nice parametric domain for free-form shape modeling and analysis. Generally, the more cubes are used to construct the polycube, the better the shape can be approximated and the lower the parameterization distortion. However, corner points of a polycube domain are singularities of this parametric representation, so a polycube domain having too many corners is undesirable. We develop an iterative algorithm to seek the optimal polycube domain and mapping, with the constraint of using a restricted number of cubes (and therefore a restricted number of corner points). We also use our polycube mapping framework to compute an optimal common polycube domain for multiple objects simultaneously, for consistent parameterization with low distortion.

14.
This paper presents a new and effective method to construct manifold T-splines of complicated topology/geometry. The fundamental idea of our novel approach is geometry-aware object segmentation, by which an arbitrarily complicated surface model can be decomposed into a group of disjoint components that comprise branches, handles, and base patches. Such a domain decomposition simplifies objects of arbitrary topological type into a family of genus-zero/one open surfaces, each of which can be conformally parameterized onto a set of rectangles. In contrast to conventional decomposition approaches, our method can guarantee that the cutting loci are consistent on the parametric domain. As a result, the resultant T-splines of the decomposed components are automatically glued and have high-order continuity everywhere except at the extraordinary points. We show that the number of extraordinary points of the domain manifold is bounded by the number of segmented components. Furthermore, the entire mesh-to-spline data conversion pipeline can be implemented with full automation and thus has potential in shape modeling and reverse engineering applications for complicated real-world objects.

15.
Strategies for shape matching using skeletons

16.
We present a novel algorithm, IlluminationCut, for rendering images using the many-lights framework. It handles any light source that can be approximated with virtual point lights (VPLs), as well as highly glossy materials. The algorithm extends the Multidimensional Lightcuts technique by effectively creating an illumination-aware clustering of the product space of the set of points to be shaded and the set of VPLs. Additionally, the number of visibility queries for each product-space cluster is reduced by using an adaptive sampling technique. Our framework is flexible and achieves around a 3 to 6 times speedup over previous state-of-the-art methods.

17.
The binary tree, quadtree, and octree decomposition techniques are widely used in computer graphics and image processing problems. Here, the techniques are reexamined for pattern recognition and shape analysis applications. It has been shown that the quadtree and octree techniques can be used to find the shape hull of a set of points in space while their n-dimensional generalization can be used for divisive hierarchical clustering. Similarly, an n-dimensional binary tree decomposition of feature space can be used for efficient pattern classifier design. Illustrative examples are presented to show the usefulness and efficiency of these hierarchical decomposition techniques.
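A minimal recursive quadtree decomposition of a 2D point set, in the spirit of the hierarchical techniques reexamined above; the leaf capacity and depth limit are assumed parameters, and the shape-hull and clustering applications are not implemented.

```python
def quadtree(points, x0, y0, size, capacity=1, depth=0, max_depth=10):
    """Recursively split the square [x0, x0+size) x [y0, y0+size) until each
    leaf holds at most `capacity` points; returns a list of (depth, cell, pts)."""
    if len(points) <= capacity or depth == max_depth:
        return [(depth, (x0, y0, size), points)]
    half = size / 2.0
    leaves = []
    for dx in (0.0, half):
        for dy in (0.0, half):
            cell_pts = [p for p in points
                        if x0 + dx <= p[0] < x0 + dx + half
                        and y0 + dy <= p[1] < y0 + dy + half]
            leaves += quadtree(cell_pts, x0 + dx, y0 + dy, half,
                               capacity, depth + 1, max_depth)
    return leaves

pts = [(0.1, 0.2), (0.15, 0.22), (0.8, 0.9), (0.4, 0.6)]
for depth, cell, cell_pts in quadtree(pts, 0.0, 0.0, 1.0):
    if cell_pts:
        print(depth, cell, cell_pts)
```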

18.
The notion of parts in a shape plays an important role in many geometry problems, including segmentation, correspondence, recognition, editing, and animation. As the fundamental geometric representation of 3D objects in computer graphics is surface-based, solutions of many such problems utilize a surface metric, a distance function defined over pairs of points on the surface, to assist shape analysis and understanding. The main contribution of our work is to bring together these two fundamental concepts: shape parts and surface metric. Specifically, we develop a surface metric that is part-aware. To encode part information at a point on a shape, we model its volumetric context – called the volumetric shape image (VSI) – inside the shape's enclosed volume, to capture relevant visibility information. We then define the part-aware metric by combining an appropriate VSI distance with geodesic distance and normal variation. We show how the volumetric view on part separation addresses certain limitations of the surface view, which relies on concavity measures over a surface as implied by the well-known minima rule. We demonstrate how the new metric can be effectively utilized in various applications including mesh segmentation, shape registration, part-aware sampling and shape retrieval.

19.
Advanced Robotics, 2013, 27(13-14): 1627-1650
In this paper, we investigate the problem of minimizing the average time required to find an object in a known three-dimensional environment. We consider a 7-d.o.f. mobile manipulator with an 'eye-in-hand' sensor. In particular, we address the problem of searching for an object whose unknown location is characterized by a known probability density function. We present a discrete formulation, in which we use a visibility-based decomposition of the environment. We introduce a sample-based convex cover to estimate the size and shape of visibility regions in three dimensions. The resulting convex regions are exploited to generate trajectories that make a compromise between moving the manipulator base and moving the robotic arm. We also propose a practical method to approximate the three-dimensional visibility region of a sensor limited in both range and field of view. The quality and success of the generated paths depend significantly on the robot's sensing capabilities. In this paper, we generate searching plans for a mobile manipulator equipped with a sensor limited in both field of view and range. We have implemented the algorithm and present simulation results.
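A small sketch of the objective being minimized: the expected time to find the object along a candidate visiting order of visibility regions, given per-region sensing costs and probability masses. Route generation, the convex cover, and the manipulator kinematics are outside this snippet, and the numbers are hypothetical.

```python
from itertools import permutations

def expected_time(order, sense_time, prob):
    """E[T] along a visiting order: the object is found at the cumulative
    arrival time of the first region that actually contains it."""
    t, expected = 0.0, 0.0
    for r in order:
        t += sense_time[r]
        expected += prob[r] * t
    return expected

# Three visibility regions with sensing/travel costs and object probabilities.
sense_time = {"A": 5.0, "B": 3.0, "C": 8.0}
prob = {"A": 0.5, "B": 0.2, "C": 0.3}
best = min(permutations(sense_time), key=lambda o: expected_time(o, sense_time, prob))
print(best, expected_time(best, sense_time, prob))
```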

20.
In this paper, the complexity of minimum corridor guarding problems is discussed. These problems can be described as follows: given a connected orthogonal arrangement of vertical and horizontal line segments and a guard with unlimited visibility along a line segment, find a tree or a closed walk with minimum total length along the edges of the arrangement, such that if the guard runs on the tree or on the closed walk, all line segments are visited by the guard. These problems are proved to be NP-complete.
