Similar Documents (20 results)
1.
Buildings with symmetrical façades are ubiquitous in urban landscapes, and detailed models of these buildings enhance the visual realism of digital urban scenes. However, the vast majority of existing urban building models in web-based 3D maps such as Google Earth are either coarsely detailed or rely heavily on texturing to render the details. We present a new framework for enhancing the details of such coarse models using the geometry and symmetry inferred from light detection and ranging (LiDAR) scans and 2D templates. The user-defined 2D templates, referred to as coded planar meshes (CPMs), encode the geometry of the smallest repeating 3D structures of the façades via face codes. Our encoding scheme takes into account the direction, type, and offset distance of the sculpting to be applied at the respective locations on the coarse model. In our approach, the LiDAR scan is registered with coarse models taken from Google Earth 3D or Bing Maps 3D and decomposed into dominant planar segments (each representing a frontal or lateral wall of the building). The façade segments are then split into horizontal and vertical tiles using a weighted point count function defined over the window or door boundaries. This is followed by automatic identification of CPM locations with the help of a template fitting algorithm that respects the alignment regularity as well as the inter-element spacing of the façade layout. Finally, 3D Boolean sculpting operations are applied over the boxes induced by the CPMs and the coarse model, and a detailed 3D model is generated. The proposed framework is capable of modelling details even from occluded scans and enhances not only the frontal façades (facing the street) but also the lateral façades of the buildings. We demonstrate the potential of the proposed framework on several examples of enhanced Google Earth models and highlight the advantages of our method when designing photo-realistic urban façades.
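As a rough illustration of the tiling step, the sketch below bins a façade's LiDAR points along one axis and flags low-density bins as likely window or door openings (glass and open areas return few points), around which tile boundaries can then be placed. The function name, bin count, and threshold are illustrative assumptions, a simplified stand-in for the paper's weighted point count function.

```python
import numpy as np

def opening_bins(points_2d, axis=0, n_bins=200, frac=0.3):
    """Histogram a facade's LiDAR points along one axis and flag
    low-density bins as likely window/door openings; tile boundaries
    can be placed around runs of flagged bins. A simplified stand-in
    for the paper's weighted point count function."""
    hist, edges = np.histogram(points_2d[:, axis], bins=n_bins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    mask = hist < frac * np.median(hist)        # sparse bins = openings
    return centers[mask]
```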

2.
Information extracted from aerial photographs is widely used in the fields of urban planning and design. An effective method for detecting buildings in aerial photographs is to use deep learning to understand the current state of a target region. However, the building mask images used to train the deep learning model must in many cases be generated manually. To overcome this challenge, a method has been proposed for automatically generating mask images by using textured three-dimensional (3D) virtual models together with aerial photographs. Some aerial photographs include clouds, which degrade image quality. These clouds can be removed by using a generative adversarial network (GAN), which improves training quality. The objective of this research was therefore to propose a method for automatically generating building mask images by using textured 3D virtual models together with aerial photographs. In this study, using a GAN to remove clouds from the aerial photographs improved training quality, and a model trained on datasets generated by the proposed method was able to detect buildings in aerial photographs with IoU = 0.651.
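For reference, intersection-over-union (IoU), the quality score quoted above, compares a predicted building mask against ground truth as in the minimal sketch below; the function name and the toy masks are illustrative, not from the paper.

```python
import numpy as np

def mask_iou(pred: np.ndarray, truth: np.ndarray) -> float:
    """Intersection-over-union of two boolean building masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    return float(intersection) / float(union) if union else 1.0

# Toy example: two overlapping rectangular "buildings".
pred = np.zeros((100, 100), dtype=bool); pred[20:60, 20:60] = True
truth = np.zeros((100, 100), dtype=bool); truth[30:70, 30:70] = True
print(f"IoU = {mask_iou(pred, truth):.3f}")
```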

3.
Modern computer graphics applications usually require high-resolution object models for realistic rendering. However, it is expensive and difficult to deform such models in real time. To reduce the computational cost during deformations, a dense model is often manipulated through a simplified structure, called a cage, which envelops the model. However, cages are usually built interactively by users, which is tedious and time-consuming. In this paper, we introduce a novel method that can build cages automatically for both 2D polygons and 3D triangular meshes. The method consists of two steps: 1) simplifying the input model with quadric error metrics and quadratic programming to build a coarse cage; 2) removing the self-intersections of the coarse cage with Delaunay partitions. With this new method, a user can build a cage that envelops an input model either entirely or partially, with approximately the vertex count the user specifies. Experimental results show that, compared to other cage building methods with the same number of vertices, cages built by our method are more similar to the input models. Thus, dense models can be manipulated with higher accuracy through our cages.
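A minimal sketch of the quadric error metrics underlying step 1 is given below: each vertex accumulates the fundamental quadrics of its incident triangles, and an edge collapse is scored by the summed quadric form (standard QEM in the style of Garland-Heckbert). The helper names are illustrative, and the paper's quadratic-programming cage constraints are omitted.

```python
import numpy as np

def plane_quadric(p0, p1, p2):
    """Fundamental quadric K = p p^T for the plane through a triangle,
    with p = (a, b, c, d), a x + b y + c z + d = 0, (a, b, c) unit length."""
    n = np.cross(p1 - p0, p2 - p0)
    n = n / np.linalg.norm(n)
    d = -np.dot(n, p0)
    p = np.append(n, d)
    return np.outer(p, p)

def vertex_quadrics(vertices, faces):
    """Sum each face quadric into its three incident vertices."""
    Q = np.zeros((len(vertices), 4, 4))
    for i, j, k in faces:
        K = plane_quadric(vertices[i], vertices[j], vertices[k])
        Q[i] += K; Q[j] += K; Q[k] += K
    return Q

def edge_collapse_cost(Q_i, Q_j, v):
    """Cost v^T (Q_i + Q_j) v of collapsing edge (i, j) to position v."""
    h = np.append(v, 1.0)
    return float(h @ (Q_i + Q_j) @ h)
```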

4.
The design of video game environments, or levels, aims to control gameplay by steering the player through a sequence of designer-controlled steps while simultaneously providing a visually engaging experience. Traditionally, these levels are painstakingly designed by hand, often from pre-existing building blocks, or space templates. In this paper, we propose an algorithmic approach for automatically laying out game levels from user-specified blocks. Our method allows designers to retain control of the gameplay flow via user-specified level connectivity graphs, while relieving them of the tedious task of manually assembling the building blocks into a valid, plausible layout. Our method produces sequences of diverse layouts for the same input connectivity, allowing repeated replay of a given level within a visually different, new environment. We support complex graph connectivities and various building block shapes, and are able to compute complex layouts in seconds. The two key components of our algorithm are the use of configuration spaces defining the feasible relative positions of building blocks within a layout, and a graph-decomposition-based layout strategy that leverages graph connectivity to speed up convergence and avoid local minima. Together, these two tools quickly steer the solution toward feasible layouts. We demonstrate our method on a variety of real-life inputs and generate appealing layouts conforming to user specifications.
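To make the first component concrete, the sketch below tests whether a relative placement of two rectangular blocks lies in the feasible configuration space, i.e. the blocks touch (so a door can connect them) without overlapping. Axis-aligned boxes and this contact rule are simplifying assumptions; the paper's configuration spaces handle general polygonal blocks.

```python
from dataclasses import dataclass

@dataclass
class Box:
    x: float; y: float; w: float; h: float

def interiors_overlap(a: Box, b: Box) -> bool:
    return (a.x < b.x + b.w and b.x < a.x + a.w and
            a.y < b.y + b.h and b.y < a.y + a.h)

def touching(a: Box, b: Box) -> bool:
    """Boundaries share a segment of positive length (so the rooms can
    be connected by a door)."""
    share_x = min(a.x + a.w, b.x + b.w) - max(a.x, b.x) > 0
    share_y = min(a.y + a.h, b.y + b.h) - max(a.y, b.y) > 0
    on_vert_edge = (a.x + a.w == b.x or b.x + b.w == a.x) and share_y
    on_horz_edge = (a.y + a.h == b.y or b.y + b.h == a.y) and share_x
    return on_vert_edge or on_horz_edge

def feasible(a: Box, b: Box) -> bool:
    """A placement of b relative to a lies in the configuration space
    iff the blocks touch without overlapping (rectangular simplification)."""
    return touching(a, b) and not interiors_overlap(a, b)

print(feasible(Box(0, 0, 4, 3), Box(4, 1, 3, 3)))   # True: shared wall
```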

5.
Supplying realistically textured 3D city models at ground level promises to be useful for pre-visualizing upcoming traffic situations in car navigation systems. Because this pre-visualization can be rendered from the expected future viewpoints of the driver, the required maneuver will be more easily understandable. 3D city models can be reconstructed from the imagery recorded by surveying vehicles. The vastness of the image material gathered by these vehicles, however, puts extreme demands on vision algorithms to ensure their practical usability: algorithms need to be as fast as possible and should result in compact, memory-efficient 3D city models for ease of distribution and visualization. For the considered application, these are not contradictory demands, as simplified geometry assumptions can speed up vision algorithms while automatically guaranteeing compact geometry models. In this paper, we present a novel city modeling framework that builds upon this philosophy to create 3D content at high speed. Objects in the environment such as cars and pedestrians may, however, disturb the reconstruction, as they violate the simplified geometry assumptions, leading to visually unpleasant artifacts that degrade the visual realism of the resulting 3D city model. Unfortunately, such objects are prevalent in urban scenes. We therefore extend the reconstruction framework by integrating it with an object recognition module that automatically detects cars in the input video streams and localizes them in 3D. The two components of our system are tightly integrated and benefit from each other's continuous input: 3D reconstruction delivers geometric scene context, which greatly helps improve detection precision, while the detected car locations are used to instantiate virtual placeholder models that augment the visual realism of the reconstructed city model.

6.
We present a semi-interactive method for 3D reconstruction specialized for indoor scenes, which combines computer vision techniques with efficient interaction. As the starting point we use panoramas, which are popular for visualizing indoor scenes thanks to their wide field of view, even though they clearly cannot convey depth. Exploiting user-defined knowledge, in the form of a rough sketch of orthogonality and parallelism in the scene, we design smart interaction techniques to semi-automatically reconstruct a scene from a coarse to a fine level. The framework is flexible and efficient: users can build a coarse walls-and-floor textured model in five mouse clicks, or a detailed model showing all furniture within a couple of minutes of interaction. We show reconstruction results on four different scenes. The accuracy of the reconstructed models is quite high, with around 1% error at full room scale. Our framework is thus a good choice both for applications requiring accuracy and for applications requiring a 3D impression of the scene.

7.
In the emerging Virtual/Augmented Reality (VR/AR) era, three-dimensional (3D) content will become as commonplace as images and videos are today, and its security and privacy should be taken into consideration. 3D content comprises surface models and solid models; surface models include point clouds, meshes, and textured models. Previous work mainly focused on the encryption of solid models, point clouds, and meshes, whereas this work focuses on the most complicated case, the 3D textured model. We propose an encryption method for 3D textured models based on the 3D Lu chaotic map. We encrypt the vertices, polygons, and textures of a 3D model separately using the 3D Lu chaotic map, and then composite the encrypted vertices, polygons, and textures to form the final encrypted 3D textured model. The experimental results show that our method encrypts and decrypts 3D textured models correctly, and that it resists typical statistical and brute-force attacks.
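As a hedged illustration of the chaotic-keystream idea (not the paper's exact scheme), the sketch below integrates the Lu (Lü) system dx/dt = a(y - x), dy/dt = -xz + cy, dz/dt = xy - bz with a simple Euler step, quantises one trajectory into a byte stream, and XORs it with the raw vertex bytes; the initial conditions act as the key, and the step size, burn-in, and quantisation here are illustrative assumptions.

```python
import numpy as np

def lu_keystream(n_bytes, x0=0.1, y0=0.2, z0=0.3,
                 a=36.0, b=3.0, c=20.0, dt=0.001, burn_in=3000):
    """Byte keystream from the Lu (Lü) chaotic system:
       dx = a(y - x), dy = -x z + c y, dz = x y - b z.
    Step size, burn-in, and quantisation are illustrative choices."""
    x, y, z = x0, y0, z0
    out = np.empty(n_bytes, dtype=np.uint8)
    for i in range(-burn_in, n_bytes):           # discard transient first
        x, y, z = (x + dt * a * (y - x),
                   y + dt * (-x * z + c * y),
                   z + dt * (x * y - b * z))
        if i >= 0:                               # quantise one state variable
            out[i] = int(abs(x) * 1e6) % 256
    return out

def xor_encrypt_vertices(vertices: np.ndarray, key=(0.1, 0.2, 0.3)) -> bytes:
    """XOR the raw bytes of the vertex array with the chaotic keystream;
    applying the same function twice with the same key decrypts."""
    raw = vertices.astype(np.float64).tobytes()
    ks = lu_keystream(len(raw), *key)
    return (np.frombuffer(raw, dtype=np.uint8) ^ ks).tobytes()
```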

8.
Generic model abstraction from examples
The recognition community has typically avoided bridging the representational gap between traditional, low-level image features and generic models. Instead, the gap has been artificially eliminated either by bringing the image closer to the models, using simple scenes containing idealized, textureless objects, or by bringing the models closer to the images, using 3D CAD model templates or 2D appearance model templates. In this paper, we attempt to bridge the representational gap for the domain of model acquisition. Specifically, we address the problem of automatically acquiring a generic 2D view-based class model from a set of images, each containing an exemplar object belonging to that class. We introduce a novel graph-theoretical formulation of the problem, in which we search for the lowest common abstraction among a set of lattices, each representing the space of all possible region groupings in a region adjacency graph representation of an input image. The problem is intractable, so we present a shortest-path-based approximation algorithm that yields an efficient solution. We demonstrate the approach on real imagery.

9.
Many existing approaches to image-to-geometry registration assume that either a textured 3D model or a good initial guess of the 3D pose is available to bootstrap the registration process. In this paper we consider the registration of photographs to 3D models even when no texture information is available. This is very challenging, as we cannot rely on texture gradients, and even shading gradients are hard to estimate since the lighting conditions are unknown. To that end, we propose average shading gradients, a rendering technique that estimates the average gradient magnitude over all lighting directions under Lambertian shading. We use this gradient representation as the building block of a registration pipeline based on matching sparse features. To cope with the inevitable false matches due to the missing texture information and to increase robustness, the pose of the 3D model is estimated in two stages: coarse pose hypotheses are first obtained from a single correct match each, subsequently refined using SIFT flow, and finally verified. We apply our algorithm to registering images of real-world objects to untextured 3D meshes of limited accuracy, and show that registration can be performed even for paintings, despite their lack of photo-realism.
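The core quantity can be approximated by Monte Carlo integration: render the Lambertian shading of the model under many sampled light directions, take the image-space gradient magnitude of each rendering, and average. The sketch below does this under the simplifying assumptions of per-pixel normals and uniformly sampled full-sphere lights; it illustrates the idea rather than reproducing the paper's renderer.

```python
import numpy as np

def sample_sphere(n, rng):
    """Uniform random directions on the unit sphere."""
    v = rng.normal(size=(n, 3))
    return v / np.linalg.norm(v, axis=1, keepdims=True)

def average_shading_gradients(normals, n_lights=256, seed=0):
    """Monte Carlo estimate of the mean gradient magnitude of the
    Lambertian shading image over all light directions.
    `normals`: (H, W, 3) per-pixel unit normals of the rendered model."""
    rng = np.random.default_rng(seed)
    acc = np.zeros(normals.shape[:2])
    for l in sample_sphere(n_lights, rng):
        s = np.clip(normals @ l, 0.0, None)      # Lambertian shading
        gy, gx = np.gradient(s)                  # image-space gradients
        acc += np.hypot(gx, gy)
    return acc / n_lights
```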

10.
We propose a comprehensive strategy to reconstruct urban building geometry from three-dimensional (3D) point clouds. First, the point clouds are segmented using a rough-detail segmentation algorithm, and refinements guided by topological relationships are performed to rectify segmentation mistakes. Then, the semantic features that belong to the buildings (such as facades and windows) are recognized and extracted. Next, each facade is cut into a sequence of slices, and initial models are recovered by sequentially detecting and connecting the anchor points. Finally, exploiting the regular arrangement of windows, a template-matching method relying on the similarity and repetitiveness of the windows is proposed to recover the details on building facades. The experimental results demonstrate that our method can automatically reconstruct the building geometry and that detailed window structures are depicted more faithfully.
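A plain normalized cross-correlation pass is one simple way to realize such window template matching; the brute-force sketch below scores a window template over a rectified facade image and keeps positions above a threshold. Function names and the threshold are illustrative, and the paper additionally exploits the repetitive grid layout of the detections.

```python
import numpy as np

def ncc_map(image: np.ndarray, template: np.ndarray) -> np.ndarray:
    """Normalized cross-correlation of a window template over a facade
    image (grayscale, float). Brute force for clarity, not speed."""
    th, tw = template.shape
    t = template - template.mean()
    tn = np.linalg.norm(t)
    H, W = image.shape
    out = np.full((H - th + 1, W - tw + 1), -1.0)
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            patch = image[y:y + th, x:x + tw]
            p = patch - patch.mean()
            denom = np.linalg.norm(p) * tn
            if denom > 1e-9:
                out[y, x] = float((p * t).sum() / denom)
    return out

def detect_windows(image, template, threshold=0.8):
    """Candidate window positions where the NCC score exceeds a threshold."""
    return np.argwhere(ncc_map(image, template) > threshold)
```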

11.
Fast and robust generation of city-scale seamless 3D urban models
Since the introduction of the concept of a “Digital Earth”, almost every major international city has been reconstructed in the virtual world, and a large volume of geometric models describing urban objects is freely available in the public domain via software like ArcGlobe and Google Earth. Although mostly created for visualization, these urban models can benefit many applications beyond it, including city-scale evacuation planning and earth-phenomenon simulations. However, the models are mostly loosely structured and implicitly defined, and they require tedious manual preparation that usually takes weeks, if not months, before they can be used. Designing algorithms that can robustly and efficiently handle unstructured urban models at the city scale therefore becomes the main technical challenge. In this paper, we present a framework that generates seamless 3D architectural models from 2D ground plans with elevation and height information. Such overlapping ground plans are commonly used in current GIS software, such as ESRI ArcGIS, and in urban model synthesis methods to depict the various components of buildings; due to measurement and manual errors, they usually contain small, sharp, and (nearly) degenerate artifacts. We show both theoretically and empirically that our framework is efficient and numerically stable. Based on our review of the related work, we believe this is the first work that attempts to automatically create 3D architectural meshes for simulation at the city level. With the goal of providing benefit beyond visualization from this large volume of urban models, our initial results are encouraging.
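As a minimal illustration of going from a 2D ground plan to 3D geometry, the sketch below extrudes a single polygonal footprint with a base elevation and height into a prism (roof and floor triangulation omitted); it ignores the overlap handling and degeneracy cleanup that are the paper's actual contribution, and the function name is illustrative.

```python
import numpy as np

def extrude_footprint(polygon, base, height):
    """Extrude a 2D building footprint (CCW list of (x, y)) into a
    closed prism: returns (vertices, quad side faces)."""
    poly = np.asarray(polygon, dtype=float)
    n = len(poly)
    bottom = np.column_stack([poly, np.full(n, base)])
    top = np.column_stack([poly, np.full(n, base + height)])
    vertices = np.vstack([bottom, top])
    sides = [(i, (i + 1) % n, (i + 1) % n + n, i + n) for i in range(n)]
    return vertices, sides

# Example: a 10 m x 6 m rectangular footprint extruded to 25 m.
V, F = extrude_footprint([(0, 0), (10, 0), (10, 6), (0, 6)],
                         base=0.0, height=25.0)
print(V.shape, len(F))   # (8, 3) vertices, 4 side quads
```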

12.
Paper pop-ups are interesting three-dimensional books that fascinate people of all ages. The design and construction of these pop-up books, however, are done manually and require a lot of time and effort. This has led to computer-assisted or automated tools for designing paper pop-ups. This paper proposes an approach for automatically converting a 3D model into a multi-style paper pop-up. Previous automated approaches have focused only on single-style pop-ups, where each is made of a single type of pop-up mechanism. In our work, we combine multiple styles in one pop-up, which is more representative of actual artists' creations. Our method abstracts a 3D model using suitable primitive shapes that both facilitate the formation of the considered pop-up mechanisms and closely approximate the input model. Each shape is then abstracted into a set of 2D patches that combine to form a valid pop-up, and we define geometric conditions that ensure the validity of the combined pop-up structures. In addition, our method employs an image-based approach for producing the patches, preserving the textures, finer details, and important contours of the input model. Finally, our system produces a printable design layout and decides an assembly order for the construction instructions. The feasibility of our results is verified by constructing actual paper pop-ups from the designs generated by our system.

13.
Understanding how an animal can deform and articulate is essential for a realistic modification of its 3D model. In this paper, we show that such information can be learned from user-clicked 2D images and a template 3D model of the target animal. We present a volumetric deformation framework that produces a set of new 3D models by deforming a template 3D model according to a set of user-clicked images. Our framework is based on a novel locally-bounded deformation energy, where every local region has its own stiffness value that bounds how much distortion is allowed at that location. We jointly learn the local stiffness bounds as we deform the template 3D mesh to match each user-clicked image, and we show that this seemingly complex task can be solved as a sequence of convex optimization problems. We demonstrate the effectiveness of our approach on cats and horses, which are highly deformable and articulated animals. Our framework produces new 3D models of animals that are significantly more plausible than those produced without learned stiffness.

14.
Virtual 3D city models increasingly cover whole city areas; hence, perceiving these complex urban structures becomes increasingly difficult. Using abstract visualization, the complexity of the models can be hidden where its visibility is unnecessary, while important features are maintained and highlighted for better comprehension and communication. We present a technique that automatically generalizes a given virtual 3D city model consisting of building models, an infrastructure network, and optional land coverage data, creating several representations at increasing levels of abstraction. Using the infrastructure network, our technique groups building models and replaces them with cell blocks, while preserving local landmarks. By computing a landmark hierarchy, we reduce the set of initial landmarks in a spatially balanced manner for use at higher levels of abstraction. In four application examples, we demonstrate smooth visualization of transitions between precomputed representations; dynamic landmark highlighting according to virtual camera distance; an implementation of a cognitively enhanced route representation; and generalization lenses that combine precomputed representations in focus + context visualization.

15.
We formulate a probabilistic framework for simultaneous region-based 2D segmentation and 2D-to-3D pose tracking, using a known 3D model. Given such a model, we aim to maximise the discrimination between statistical foreground and background appearance models via direct optimisation of the 3D pose parameters. The foreground region is delineated by the zero level set of a signed distance embedding function, and we define an energy over this region and its immediate background surroundings based on pixel-wise posterior membership probabilities (as opposed to likelihoods). We derive the differentials of this energy with respect to the pose parameters of the 3D object, meaning we can search for the correct pose using standard gradient-based non-linear minimisation techniques. We propose novel enhancements at the pixel level based on temporal consistency and improved online appearance model adaptation. Furthermore, straightforward extensions of our method lead to multi-camera and multi-object tracking within the same framework. The parallel nature of much of the processing in our algorithm makes it amenable to GPU acceleration, and we give details of our real-time implementation, which we use to generate experimental results on both real and artificial video sequences with a number of 3D models. These experiments demonstrate the benefit of using pixel-wise posteriors rather than likelihoods, and showcase the strengths of our tracker, such as robustness to occlusions and motion blur, as well as some of its failure modes.
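The distinction between posteriors and likelihoods can be made concrete with a short sketch: per pixel, fuse the foreground/background appearance likelihoods with the regions' area fractions via Bayes' rule, P(fg | y) = ηf P(y|fg) / (ηf P(y|fg) + ηb P(y|bg)). The function below assumes the likelihoods come from, e.g., colour histograms of the regions inside and outside the zero level set; the names are illustrative.

```python
import numpy as np

def pixelwise_posteriors(p_y_fg: np.ndarray, p_y_bg: np.ndarray,
                         eta_f: float) -> np.ndarray:
    """Per-pixel posterior of foreground membership,
        P(fg | y) = eta_f P(y|fg) / (eta_f P(y|fg) + eta_b P(y|bg)),
    with eta_b = 1 - eta_f the background area fraction. Inputs are
    per-pixel appearance likelihoods (e.g. colour-histogram lookups)."""
    eta_b = 1.0 - eta_f
    num = eta_f * p_y_fg
    return num / (num + eta_b * p_y_bg + 1e-12)   # eps avoids 0/0

# Toy example: a pixel twice as likely under the foreground model,
# in a frame where the object covers 20% of the image.
print(pixelwise_posteriors(np.array(0.2), np.array(0.1), eta_f=0.2))
```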

16.
In this paper, we propose a new method for reconstructing 3D models from a noisy and incomplete 3D scan and a coarse template model. The main idea is to maintain characteristic high-level features of the template that remain unchanged across different variants of the same type of object. As invariants, we choose the partial symmetry structure of the template model under Euclidean transformations; i.e., we maintain the algebraic structure of all reflections, rotations, and translations that map the object partially onto itself. We propose an optimization scheme that maintains continuous and discrete symmetry properties of this kind while registering a template against scan data using a deformable iterative closest point (ICP) framework with thin-plate-spline regularization. We apply our new deformation approach to a large number of example data sets and demonstrate that symmetry-guided template matching often yields much more plausible reconstructions than previous variants of ICP.

17.
Template-based 3D model fitting using dual-domain relaxation
We introduce a template fitting method for 3D surface meshes. A given template mesh is deformed to closely approximate the input 3D geometry, and the connectivity of the deformed template is automatically adjusted to facilitate the geometric fitting and to ensure high-quality mesh elements. The fitting process utilizes a specially tailored Laplacian processing framework: in the first, coarse fitting stage, we approximate the input geometry with a linearized biharmonic surface (a variant of the LS-mesh); the fine geometric detail is then fitted using iterative Laplacian editing with reliable correspondence constraints and a local surface flattening mechanism that avoids foldovers. The latter step is performed in the dual mesh domain, which is shown to encourage near-equilateral mesh elements and significantly reduces the occurrence of triangle foldovers, a well-known problem in mesh fitting. To evaluate our approach experimentally, we compare it with relevant state-of-the-art techniques and confirm significant improvements in the results. In addition, we demonstrate the usefulness of our approach for consistent surface parameterization (also known as cross-parameterization).
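As a rough sketch of the coarse stage, the snippet below builds a uniform graph Laplacian of the template and solves a linear least-squares system that balances a biharmonic smoothness term against soft anchor (correspondence) constraints, in the spirit of an LS-mesh. The uniform (rather than cotangent) weights, the normal-equations solve, and all names are simplifying assumptions.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def uniform_laplacian(n_vertices, edges):
    """Uniform graph Laplacian L = D - A of the template mesh."""
    i, j = np.asarray(edges).T
    A = sp.coo_matrix((np.ones(len(i)), (i, j)),
                      shape=(n_vertices, n_vertices))
    A = ((A + A.T) > 0).astype(float)            # symmetrize
    D = sp.diags(np.asarray(A.sum(axis=1)).ravel())
    return (D - A).tocsr()

def biharmonic_fit(n_vertices, edges, anchors, anchor_pos, w=10.0):
    """Coarse LS-mesh-style fit: per coordinate, minimize
       ||L^2 x||^2 + w^2 * sum_k (x[anchors[k]] - anchor_pos[k])^2
    in the least-squares sense (normal equations, for brevity)."""
    L = uniform_laplacian(n_vertices, edges)
    B = L @ L                                    # biharmonic operator
    C = sp.coo_matrix((np.full(len(anchors), w),
                       (np.arange(len(anchors)), anchors)),
                      shape=(len(anchors), n_vertices)).tocsr()
    A = sp.vstack([B, C])
    rhs = np.vstack([np.zeros((n_vertices, 3)),
                     w * np.asarray(anchor_pos, dtype=float)])
    M = (A.T @ A).tocsc()
    b = A.T @ rhs                                # dense (n_vertices, 3)
    return np.column_stack([spla.spsolve(M, b[:, k]) for k in range(3)])
```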

18.
We investigate the problem of automatically creating 3D models of man-made environments, which we represent as collections of textured planes. A typical approach is to automatically reconstruct a sparse 3D model made of points and to manually indicate the points' plane membership as well as the delineation of the planes: this is the piecewise planar segmentation phase. Texture images are then extracted by merging perspectively corrected input images. We propose an automatic approach to the piecewise planar segmentation phase that detects, from a sparse point-based 3D model, the number of planes needed to approximate the scene surface to a given extent, together with the parameters of these planes. Our segmentation method is inspired by the robust estimator RANSAC: it generates and scores plane hypotheses by random sampling of the 3D points. Our plane scoring function and our plane comparison function, the latter required to prevent detecting the same plane twice, are designed to detect planes with large or small support; the scoring function recovers the plane delineation and quantifies the saliency of the plane hypothesis based on approximate photoconsistency. We finally refine all the 3D model parameters, i.e., the planes, the points on these planes, and the camera poses, by minimizing the reprojection error with respect to the measured image points using bundle adjustment. The approach is validated on simulated and real data.
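The hypothesize-and-score loop at the heart of such a method can be sketched in a few lines: sample three points, fit a plane, and count inliers within a distance threshold, keeping the best hypothesis (further planes are found by repeating after removing inliers). This is textbook RANSAC plane detection with illustrative parameters, not the paper's photoconsistency-based scoring.

```python
import numpy as np

def ransac_plane(points, n_iters=500, threshold=0.02, seed=0):
    """Detect one dominant plane in a 3D point cloud with RANSAC.
    Returns ((unit normal n, offset d) with n . p + d = 0, inlier mask)."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(points), dtype=bool)
    best_plane = None
    for _ in range(n_iters):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(n)
        if norm < 1e-12:                         # degenerate sample
            continue
        n = n / norm
        d = -np.dot(n, p0)
        inliers = np.abs(points @ n + d) < threshold
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_plane = inliers, (n, d)
    return best_plane, best_inliers
```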

19.
Image-based hair modeling methods enable artists to produce abundant 3D hair models. However, the reconstructed models often fail to preserve structural details such as uniformly distributed hair roots, interior strands that grow in line with real hair distributions, and exterior strands that match the input images. In this paper, we propose a novel approach that constructs a realistic 3D hair model from a hybrid orientation field generated from four constituent fields. The first field makes the surface structure of the hairstyle match the input images as closely as possible. The second makes the hair roots and interior strands consistent with actual hair distributions. The third confines traced strands to the hair volume, and the fourth keeps the growth direction at each point of a strand compatible with that of its predecessor. To generate these fields, we construct high-confidence 3D strand segments from the orientation field of the point cloud and from 2D traced strands. Hair strands then grow automatically from uniformly distributed hair roots according to the hybrid orientation field, and an energy minimization strategy optimizes the entire 3D hair model. We demonstrate that our approach preserves the structural details of 3D hair models.

20.
Mesh geometry can be used to model both object shape and details. If texture maps are involved, it is common to let the mesh geometry model mainly the object's shape and to let the texture maps model most of the details, optimising the data size and complexity of the object. To support efficient rendering and transmission, model simplification can be applied to reduce the modelling data. However, existing methods do not adequately consider how object features are represented jointly by mesh geometry and texture maps, and thus have difficulty identifying and preserving important features in the simplified objects. To address this, we propose a visual saliency detection method for simplifying textured 3D models. We produce good simplification results by processing the mesh geometry and texture map jointly to produce a unified saliency map that identifies visually important object features. Results show that our method offers better rendering quality than existing methods.
