Similar documents
 Found 20 similar documents (search time: 31 ms)
1.
2.
We present a novel and light‐weight approach to capturing and reconstructing structured 3D models of multi‐room floor plans. Starting from a small set of registered panoramic images, we automatically generate a 3D layout of the rooms and of all the main objects inside. Such a 3D layout is directly suitable for use in a number of real‐world applications, such as guidance, location, routing, or content creation for security and energy management. Our novel pipeline introduces several contributions to indoor reconstruction from purely visual data. In particular, we automatically partition panoramic images into a connectivity graph, according to the visual layout of the rooms, and exploit this graph to support object recovery and room‐boundary extraction. Moreover, we introduce a plane‐sweeping approach to jointly reason about the content of multiple images and solve the problem of object inference in a top‐down 2D domain. Finally, we combine these methods in a fully automated pipeline for creating a structured 3D model of a multi‐room floor plan and of the location and extent of clutter objects. These contributions make our pipeline able to handle cluttered scenes with complex geometry that are challenging for existing techniques. The effectiveness and performance of our approach are evaluated on both real‐world and synthetic models.

3.
Large products cannot be printed as a single piece on most 3D printers because of their limited build volume. Products with complex structure and high surface-quality requirements should also not be printed as a single piece if printing quality is to be maintained. To increase surface quality and reduce the support structure of 3D-printed models, this paper proposes a 3D model segmentation method based on deep learning. Sub-graphs are generated by pre-segmenting 3D triangular mesh models to extract printing features. A data structure is proposed to build training data sets from the sub-graphs, capturing printing features of the original 3D model including surface quality, support structure, and normal curvature. After training a stacked auto-encoder on the training set, a 3D model is pre-segmented to build an application set using the sub-graph data structure. The trained deep-learning system is applied to the application set to generate hidden features. An Affinity Propagation clustering method then combines the hidden features with geometric information of the application set to segment a product model into several parts. In the case study, sample 3D models are segmented by the proposed method and then printed on a 3D printer to validate the performance.
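The final grouping step can be sketched in a few lines. The following is a heavily simplified, pure-Python stand-in for the paper's Affinity Propagation step (which libraries such as scikit-learn implement in full): a greedy single-link grouping of per-sub-graph feature vectors, where each vector would concatenate the auto-encoder's hidden features with geometric descriptors. The feature layout and distance threshold are illustrative assumptions.

```python
import math

def cluster_subgraphs(features, threshold=0.5):
    """Greedy single-link clustering of sub-graph feature vectors.

    Simplified stand-in for Affinity Propagation: each vector is
    attached to the first existing cluster containing a member within
    `threshold` Euclidean distance, otherwise it starts a new cluster.
    """
    labels = [-1] * len(features)
    next_label = 0
    for i, f in enumerate(features):
        for j in range(i):
            if math.dist(f, features[j]) <= threshold:
                labels[i] = labels[j]
                break
        if labels[i] == -1:
            labels[i] = next_label
            next_label += 1
    return labels
```

On four feature vectors forming two tight groups, `cluster_subgraphs([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])` yields the two-part labeling `[0, 0, 1, 1]`. Unlike Affinity Propagation, this greedy variant needs an explicit threshold and can chain distant points; it only illustrates the data flow, not the paper's clustering quality.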

4.
Objective: In recent years, copyright labeling and protection of 3D-printed models has attracted researchers' attention. To comprehensively reflect the current state and latest progress of research on copyright protection for 3D-printed models, this paper surveys and analyzes the main literature published domestically and abroad. Method: First, based on an extensive literature survey, we describe 3D-printed model files and the types of attacks on 3D-printed models, and analyze the effects of the printing and scanning processes on a model. We then classify copyright-protection strategies for 3D-printed models and elaborate the basic framework and technical characteristics of each class of methods. Finally, drawing on the literature, we compare digital watermarking methods for 3D-printed models with traditional mesh watermarking algorithms in terms of robustness against 3D print–scan attacks. Results: Copyright-protection techniques based on physical properties can embed effective, somewhat covert copyright marks into printed models, but extracting information such as embedded microstructures requires specialized equipment, so they are not universally applicable. Digital-watermarking methods can resist traditional digital-domain attacks such as similarity transformation, cropping, noise, subdivision, quantization, and smoothing, and can also effectively resist digital/physical modality-conversion attacks on 3D models; high-precision 3D printers and scanners can markedly improve watermark detection rates. Conclusion: 3D-printed products embody the wisdom and effort of designers and manufacturers and carry intellectual property. With the wide industrial adoption of 3D printing, copyright protection for 3D-printed models has broad application prospects and research value; however, current detection and evaluation mechanisms for such protection have limitations, and a unified test library of 3D-printed models and a watermark evaluation framework for 3D-printed models remain to be built.

5.
Procedural textile models are compact, easy to edit, and can achieve state‐of‐the‐art realism with fiber‐level details. However, these complex models generally need to be fully instantiated (i.e., realized) into 3D volumes or fiber meshes and stored in memory. We introduce a novel realization‐minimizing technique that enables physically based rendering of procedural textiles without the need for full model realizations. The key ingredients of our technique are new data structures and search algorithms that look up regular and flyaway fibers on the fly, efficiently and consistently. Our technique works with compact fiber‐level procedural yarn models in their exact form, with no approximation imposed. In practice, our method can render very large models that are practically unrenderable using existing methods, while using considerably less memory (60–200× less) and achieving good performance.

6.
7.
Digital fabrication devices are powerful tools for creating tangible reproductions of 3D digital models. Most available printing technologies aim at producing an accurate copy of a three‐dimensional shape. However, fabrication technologies can also be used to create a stylistic representation of a digital shape. We refer to this class of methods as ‘stylized fabrication methods’. These methods abstract geometric and physical features of a given shape to create an unconventional representation, to produce an optical illusion or to devise a particular interaction with the fabricated model. In this state‐of‐the‐art report, we classify and overview this broad and emerging class of approaches and also propose possible directions for future research.

8.
The advent of affordable consumer grade RGB‐D cameras has brought about a profound advancement of visual scene reconstruction methods. Both computer graphics and computer vision researchers spend significant effort to develop entirely new algorithms to capture comprehensive shape models of static and dynamic scenes with RGB‐D cameras. This led to significant advances of the state of the art along several dimensions. Some methods achieve very high reconstruction detail, despite limited sensor resolution. Others even achieve real‐time performance, yet possibly at lower quality. New concepts were developed to capture scenes at larger spatial and temporal extent. Other recent algorithms flank shape reconstruction with concurrent material and lighting estimation, even in general scenes and unconstrained conditions. In this state‐of‐the‐art report, we analyze these recent developments in RGB‐D scene reconstruction in detail and review essential related work. We explain, compare, and critically analyze the common underlying algorithmic concepts that enabled these recent advancements. Furthermore, we show how algorithms are designed to best exploit the benefits of RGB‐D data while suppressing their often non‐trivial data distortions. In addition, this report identifies and discusses important open research questions and suggests relevant directions for future work.

9.
We present a method to design the deformation behavior of 3D printed models by an interactive tool, where the variation of bending elasticity at different regions of a model is realized by a change in shell thickness. Given a soft material to be used in 3D printing, we propose an experimental setup to acquire the bending behavior of this material on tubes with different diameters and thicknesses. The relationship between shell thickness and bending elasticity is stored in an echo state network using the acquired dataset. With the help of the network, an interactive design tool is developed to generate non‐uniformly hollowed models to achieve desired bending behaviors. The effectiveness of this method is verified on models fabricated by different 3D printers by studying whether their physical deformation can match the designed target shape.
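The core lookup the design tool performs — "what shell thickness gives the desired bending stiffness?" — can be illustrated with a far simpler stand-in for the echo state network: piecewise-linear inversion of measured (thickness, stiffness) samples. All sample values below are hypothetical, and the real mapping is learned, not interpolated.

```python
def thickness_for_stiffness(samples, target):
    """Invert a measured thickness -> bending-stiffness curve.

    `samples` is a list of (thickness_mm, stiffness) pairs, assumed
    monotonically increasing in both coordinates; hypothetical numbers
    stand in for the paper's echo-state-network regression.
    """
    pts = sorted(samples)
    for (t0, s0), (t1, s1) in zip(pts, pts[1:]):
        if s0 <= target <= s1:
            # linear interpolation between the bracketing measurements
            return t0 + (t1 - t0) * (target - s0) / (s1 - s0)
    raise ValueError("target stiffness outside measured range")
```

For example, with hypothetical samples `[(1.0, 10.0), (2.0, 30.0), (3.0, 60.0)]`, a target stiffness of 20.0 falls halfway between the first two measurements, giving a shell thickness of 1.5 mm.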

10.
The analysis of protein‐ligand interactions is complex because of the many factors at play. Most current methods for visual analysis provide this information as simple 2D plots, which, besides being quite space-hungry, often encode only a small number of properties. In this paper we present a system for compact 2D visualization of molecular simulations. It purposely omits most spatial information and presents physical information associated with single molecular components and their pairwise interactions through a set of 2D InfoVis tools with coordinated views, suitable interaction, and focus+context techniques for analyzing large amounts of data. The system provides a wide range of motifs for elements such as protein secondary structures or hydrogen-bond networks, and a set of tools for their interactive inspection, both for a single simulation and for comparing two different simulations. As a result, the analysis of protein‐ligand interactions in molecular simulation trajectories is greatly facilitated.

11.
Traditional geospatial information visualizations often present views that restrict the user to a single perspective. When zoomed out, local trends and anomalies become suppressed and lost; when zoomed in for local inspection, spatial awareness and comparison between regions become limited. In our model, coordinated visualizations are integrated within individual probe interfaces, which depict the local data in user-defined regions-of-interest. Our probe concept can be incorporated into a variety of geospatial visualizations to empower users with the ability to observe, coordinate, and compare data across multiple local regions. It is especially useful when dealing with complex simulations or analyses where behavior in various localities differs from other localities and from the system as a whole. We illustrate the effectiveness of our technique over traditional interfaces by incorporating it within three existing geospatial visualization systems: an agent-based social simulation, a census data exploration tool, and a 3D GIS environment for analyzing urban change over time. In each case, the probe-based interaction enhances spatial awareness, improves inspection and comparison capabilities, expands the range of scopes, and facilitates collaboration among multiple users.

12.
In many 3D applications, building models in polygon-soup representation are commonly used for visualization purposes, for example in movies and games. Their appearance is fine; however, geometrically they may have limited connectivity information and internal intersections between their parts. Therefore, they are not well suited for direct use in 3D geospatial applications, which usually require geometric analysis. For an input building model in polygon-soup representation, we propose a novel appearance-driven approach to interactively convert it into a two-manifold model, which is better suited for 3D geospatial applications. In addition, the level of detail (LOD) can be controlled interactively during the conversion. Because a model in polygon-soup representation is not well suited for geometric analysis, the main idea of the proposed method is to extract the visual appearance of the input building model and utilize it to facilitate the conversion and LOD generation. The silhouettes are extracted and used to identify the features of the building. Then, according to the locations of these features, horizontal cross-sections are generated. We connect two adjacent horizontal cross-sections to reconstruct the building. We control the LOD by processing the features on the silhouettes and horizontal cross-sections using a 2D approach. We also propose facilitating the conversion and LOD control by integrating a variety of rasterization methods. The results of our experiments demonstrate the effectiveness of our method.

13.
By combining semantic scene-graph markups with generative modeling, this framework retains semantic information late in the rendering pipeline. It can thus enhance visualization effects and interactive behavior without compromising interactive frame rates. Large geospatial databases are populated with the results of hundreds of person-years of surveying effort. Utility workers access these databases during fieldwork to help them determine asset location. Real-time rendering engines are highly advanced and optimized software toolkits that interactively display 3D information to users. To connect geospatial databases and rendering engines, we must transcode raw 2D geospatial data into 3D models suitable for standard rendering engines. Thus, transcoding isn't simply a one-to-one conversion from one format to another; we obtain 3D models from 2D information through procedural 3D modeling. Transcoding the geospatial database information's semantic attributes into visual primitives entails information loss. We must therefore find the right point in the pipeline to perform transcoding.

14.
Fused deposition modeling based 3D‐printing is becoming increasingly popular due to its low cost and simple operation and maintenance. While it produces rugged prints made from a wide range of materials, it suffers from an inherent printing limitation: it cannot produce overhanging surfaces of non‐trivial size. This limitation can be handled by constructing temporary support structures; however, this solution involves additional material cost, longer print times, and often a fair amount of labor to remove them. In this paper we present a new method for partitioning general solid objects into a small number of parts that can be printed with no support. The partitioning is computed by applying a sequence of cutting‐planes that split the object recursively. Unlike existing algorithms, the planes are not chosen at random; rather, they are derived from shape analysis routines that identify and resolve various commonly‐found geometric configurations. In addition, we guide this search by a revised set of conditions that both ensure the objects' printability and realistically model the printing capabilities of the printer at hand. Evaluation of the new method demonstrates its ability to efficiently obtain support‐free partitionings, typically containing fewer parts compared to existing methods that rely on support‐structures.
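The printability condition such partitioning schemes build on can be illustrated with the standard per-face overhang test: with the build direction along +z, a face needs support when its outward normal tilts downward past a critical angle. The 45° default below is a common FDM rule of thumb, not necessarily the paper's revised condition.

```python
import math

def needs_support(normal, max_overhang_deg=45.0):
    """Return True if a face with this outward normal overhangs too far.

    Build direction is +z. A face needs support when the angle between
    its normal and the straight-down direction (0, 0, -1) is smaller
    than 90 - max_overhang_deg, i.e. the face tilts downward by more
    than max_overhang_deg from a vertical wall.
    """
    nx, ny, nz = normal
    length = math.sqrt(nx * nx + ny * ny + nz * nz)
    cos_down = -nz / length  # cosine of angle to the downward direction
    return cos_down > math.cos(math.radians(90.0 - max_overhang_deg))
```

A downward-facing ceiling (`(0, 0, -1)`) needs support; a vertical wall (`(1, 0, 0)`) and an upward-facing floor (`(0, 0, 1)`) do not. A support-free partitioning would cut the object so that, in each part's own print orientation, no face fails this test.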

15.
We present an adaptive slicing scheme for reducing the manufacturing time for 3D printing systems. Based on a new saliency‐based metric, our method optimizes the thicknesses of slicing layers to save printing time and preserve the visual quality of the printing results. We formulate the problem as a constrained ℓ0 optimization and compute the slicing result via a two‐step optimization scheme. To further reduce printing time, we develop a saliency‐based segmentation scheme to partition an object into subparts and then optimize the slicing of each subpart separately. We validate our method with a large set of 3D shapes ranging from CAD models to scanned objects. Results show that our method saves printing time by 30–40% and generates 3D objects that are visually similar to the ones printed with the finest resolution possible.
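The idea of saliency-driven layer thickness can be sketched with a heavily simplified greedy scheme (not the paper's constrained ℓ0 solver): walk up the object's height, choosing a fine layer wherever a saliency profile is high and a coarse one elsewhere. The two thicknesses and the cutoff are illustrative values.

```python
def adaptive_slices(height, saliency, fine=0.1, coarse=0.3, cutoff=0.5):
    """Greedy stand-in for saliency-based adaptive slicing.

    `saliency(z)` maps a height to a visual-importance score in [0, 1];
    salient regions get the fine layer thickness, the rest the coarse
    one. Returns a list of (layer_bottom_z, thickness) pairs.
    """
    layers, z = [], 0.0
    while z < height:
        t = fine if saliency(z) >= cutoff else coarse
        t = min(t, height - z)  # the last layer must not overshoot
        layers.append((z, t))
        z += t
    return layers
```

A uniformly non-salient 0.6 mm-tall region yields two coarse layers, while a salient one is cut into fine layers; fewer layers directly translate into shorter print time, which is the trade-off the ℓ0 formulation optimizes globally instead of greedily.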

16.
Brick elements are very popular and have been widely used in many areas, such as toy design and architecture. Designing a vivid brick sculpture to represent a three‐dimensional (3D) model is a very challenging task, which requires professional skills and experience to convey unique visual characteristics. We introduce an automatic system to convert an architectural model into a LEGO sculpture while preserving the original model's shape features. Unlike previous legolization techniques that generate a LEGO sculpture based exactly on the input model's voxel representation, we extract the model's visual features, including repeating components, shape details and planarity. Then, we translate these visual features into the final LEGO sculpture by employing various brick types. We propose a deformation algorithm in order to resolve discrepancies between an input mesh's continuous 3D shape and the discrete positions of bricks in a LEGO sculpture. We evaluate our system on various architectural models and compare our method with previous voxelization‐based methods. The results demonstrate that our approach successfully conveys important visual features from digital models and generates vivid LEGO sculptures.

17.
Most 3D vector field visualization techniques suffer from the problem of visual clutter, and it remains a challenging task to effectively convey both directional and structural information of 3D vector fields. In this paper, we present a novel visualization framework that combines the advantages of clustering methods and illustrative rendering techniques to generate a concise and informative depiction of complex flow structures. Given a 3D vector field, we first generate a number of streamlines covering the important regions based on an entropy measurement. Then we decompose the streamlines into different groups based on a categorization of vector information, wherein the streamline pattern in each group is ensured to be coherent or nearly coherent. For each group, we select a set of representative streamlines and render them in an illustrative fashion to enhance depth cues and succinctly show local flow characteristics. The results demonstrate that our approach can generate a visualization that is relatively free of visual clutter while facilitating perception of salient information of complex vector fields.
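The entropy measurement used to find important seeding regions can be illustrated with a minimal sketch: Shannon entropy of quantized flow directions in a local neighborhood. The version below works on 2D vectors for brevity (the paper operates on 3D fields); uniform flow scores zero, while turbulent or converging regions score high.

```python
import math
from collections import Counter

def direction_entropy(vectors, bins=8):
    """Shannon entropy (bits) of quantized 2D flow directions.

    A simple proxy for the importance measure the abstract mentions:
    each vector's angle is quantized into one of `bins` sectors, and
    the entropy of the resulting histogram is returned.
    """
    counts = Counter(
        int((math.atan2(vy, vx) + math.pi) / (2 * math.pi) * bins) % bins
        for vx, vy in vectors
    )
    n = len(vectors)
    return -sum(c / n * math.log2(c / n) for c in counts.values())
```

A neighborhood where every sample points the same way has entropy 0, while four mutually orthogonal directions give the maximum for four samples, 2 bits; streamline seeds would be concentrated in the high-entropy regions.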

18.
Digital Earth is a global reference model for integrating, processing and visualizing geospatial datasets. In this reference model, various data-types, including Digital Elevation Models (DEM) and imagery (orthophotos), are universally and openly available for the entire globe. However, 3D content such as detailed terrains with features, man-made structures, 3D water bodies and 3D vegetation are not commonly available in Digital Earth. In this paper, we present an interactive system for the rapid creation and integration of these types of 3D content to augment Digital Earth. The inputs to our system include available data sources, such as DEM and imagery information depicting landscapes and urban environments. The proposed system employs sketch-based and image-assisted tools to support interactive creation of textured 3D content. For adding terrain features visible in orthophotos, and also the basin of water bodies, we use a multiscale least square surface fitting to generate an adaptive triangular subdivision. For modeling forests and vegetation, we use image-based techniques and take advantage of visible regions and colors of forests in orthophotos. For 3D man-made structures, starting from a single photograph, we provide a simple image-assisted sketching tool to extract these objects, correct for perspective distortion and place them into desired locations.

19.
Examining and manipulating large volumetric data attracts great interest in various applications. To this end, we first extend the 2D moving least squares (MLS) technique to 3D and propose a texture-guided deformation technique for creating visualization styles through interactive manipulation of volumetric models using 3D MLS. Our framework includes focus+context (F+C) visualization for simultaneously showing the entire model after magnification, and cut-away or illustrative visualization for providing a better understanding of anatomical and biological structures. Both visualization styles are widely applied in graphics. We present a mechanism for defining features using high-dimensional texture information, and design an interface for visualizing, selecting, and extracting features/objects of interest. Methods for the interactive or automatic generation of 3D control points are proposed for flexible and plausible deformation. We describe a GPU-based implementation that achieves real-time performance for the deformation techniques and manipulation operators. Unlike physical deformation models, our framework is goal-oriented and user-guided. We demonstrate the robustness and efficiency of our framework on various volumetric datasets.
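The 2D-to-3D lift of MLS deformation can be sketched directly from the standard affine MLS formulation: given control handles p_i dragged to targets q_i, a point v maps to (v − p*)M + q*, where p*, q* are the weighted centroids and M solves the weighted least-squares system (Σ wᵢ p̂ᵢᵀ p̂ᵢ) M = Σ wᵢ p̂ᵢᵀ q̂ᵢ. The pure-Python version below is illustrative only, not the paper's texture-guided, GPU-based implementation.

```python
def dist2(a, b):
    return sum((ai - bi) ** 2 for ai, bi in zip(a, b))

def mat_solve(A, B):
    """Solve A X = B for 3x3 matrices by Gauss-Jordan elimination."""
    M = [A[r][:] + B[r][:] for r in range(3)]  # augmented 3x6 matrix
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        f = M[col][col]
        M[col] = [x / f for x in M[col]]
        for r in range(3):
            if r != col:
                g = M[r][col]
                M[r] = [x - g * y for x, y in zip(M[r], M[col])]
    return [row[3:] for row in M]

def mls_affine_deform(v, handles, targets, alpha=2.0):
    """Affine moving-least-squares deformation of one 3D point.

    Sketch of the 2D affine MLS scheme lifted to 3D; degenerate
    (e.g. coplanar) handle sets make the system singular.
    """
    w = [1.0 / (dist2(p, v) ** alpha + 1e-12) for p in handles]
    W = sum(w)
    pstar = [sum(wi * p[k] for wi, p in zip(w, handles)) / W for k in range(3)]
    qstar = [sum(wi * q[k] for wi, q in zip(w, targets)) / W for k in range(3)]
    # A = sum w_i p_hat^T p_hat,  B = sum w_i p_hat^T q_hat  (3x3 each)
    A = [[0.0] * 3 for _ in range(3)]
    B = [[0.0] * 3 for _ in range(3)]
    for wi, p, q in zip(w, handles, targets):
        ph = [p[k] - pstar[k] for k in range(3)]
        qh = [q[k] - qstar[k] for k in range(3)]
        for r in range(3):
            for c in range(3):
                A[r][c] += wi * ph[r] * ph[c]
                B[r][c] += wi * ph[r] * qh[c]
    M = mat_solve(A, B)  # M = A^{-1} B
    d = [v[k] - pstar[k] for k in range(3)]
    return [sum(d[r] * M[r][c] for r in range(3)) + qstar[c] for c in range(3)]
```

Two sanity checks: if the targets equal the handles, every point maps to itself; if all handles are translated by the same offset, every point is translated by that offset. In the paper's setting v would range over voxel-grid sample positions, with the control points generated interactively or automatically as described.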

20.
Reproducing the appearance of real‐world materials using current printing technology is problematic. The reduced number of inks available defines the printer's limited gamut, creating distortions in the printed appearance that are hard to control. Gamut mapping refers to the process of bringing an out‐of‐gamut material appearance into the printer's gamut, while minimizing such distortions as much as possible. We present a novel two‐step gamut mapping algorithm that allows users to specify which perceptual attribute of the original material they want to preserve (such as brightness, or roughness). In the first step, we work in the low‐dimensional intuitive appearance space recently proposed by Serrano et al. [SGM*16], and adjust achromatic reflectance via an objective function that strives to preserve certain attributes. From this intermediate representation, we then perform an image‐based optimization including color information, to bring the BRDF into gamut. We show, both objectively and through a user study, how our method yields superior results compared to the state of the art, with the additional advantage that the user can specify which visual attributes need to be preserved. Moreover, we show how this approach can also be used for attribute‐preserving material editing.


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司)  京ICP备09084417号