Similar Literature
20 similar documents retrieved.
1.
Shape interpolation has many applications in computer graphics such as morphing for computer animation. In this paper, we propose a novel data-driven mesh interpolation method. We adapt patch-based linear rotational invariant coordinates to effectively represent deformations of models in a shape collection, and utilize this information to guide the synthesis of interpolated shapes. Unlike previous data-driven approaches, we use a rotation/translation invariant representation which defines the plausible deformations in a global continuous space. By effectively exploiting the knowledge in the shape space, our method produces realistic interpolation results at interactive rates, outperforming state-of-the-art methods for challenging cases. We further propose a novel approach to interactive editing of shape morphing according to the shape distribution. The user can explore the morphing path and select example models intuitively and adjust the path with simple interactions to edit the morphing sequences. This provides a useful tool to allow users to generate desired morphing with little effort. We demonstrate the effectiveness of our approach using various examples.
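As a rough illustration of interpolating in a rotation-invariant deformation representation (far simpler than the patch-based linear rotation-invariant coordinates used in the paper), the sketch below blends a per-element deformation gradient by splitting it into a rotation and a stretch via polar decomposition; the function names and the Poisson-style reconstruction note are assumptions for illustration only.

```python
# A minimal sketch of rotation-invariant blending of a per-element deformation
# gradient: interpolate the rotation geodesically and the stretch linearly.
# This is an illustrative stand-in, not the paper's representation.
import numpy as np
from scipy.linalg import polar, logm, expm

def blend_deformation(F_target, t):
    """Interpolate a 3x3 deformation gradient from identity to F_target."""
    R, S = polar(F_target)                 # F = R @ S: rotation times stretch
    R_t = expm(t * logm(R)).real           # geodesic interpolation of the rotation
    S_t = (1.0 - t) * np.eye(3) + t * S    # linear blend of the stretch part
    return R_t @ S_t

# Example: blend a 90-degree rotation combined with anisotropic scaling.
Rz = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
F1 = Rz @ np.diag([1.5, 1.0, 0.8])
F_half = blend_deformation(F1, 0.5)
# In a full pipeline, blended gradients for all patches would be stitched into
# vertex positions by solving a sparse Poisson/least-squares system.
```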

2.
We present a 3-D correspondence method to match the geometric extremities of two shapes which are partially isometric. We consider the most general setting of the isometric partial shape correspondence problem, in which shapes to be matched may have multiple common parts at arbitrary scales as well as parts that are not similar. Our rank-and-vote-and-combine algorithm identifies and ranks potentially correct matches by exploring the space of all possible partial maps between coarsely sampled extremities. The qualified top-ranked matchings are then subjected to a more detailed analysis at a denser resolution and assigned confidence values that accumulate into a vote matrix. A minimum weight perfect matching algorithm is finally iterated to combine the accumulated votes into an optimal (partial) mapping between shape extremities, which can further be extended to a denser map. We test the performance of our method on several data sets and benchmarks in comparison with the state of the art.
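The final "combine" stage can be illustrated with a standard assignment solver: accumulated votes between extremities are turned into a one-to-one (partial) matching. The vote values below are invented for illustration, and the ranking/voting stages that would produce them are not reproduced.

```python
# A minimal sketch of combining an accumulated vote matrix into a partial
# matching between shape extremities using an assignment solver.
import numpy as np
from scipy.optimize import linear_sum_assignment

votes = np.array([            # votes[i, j]: confidence that extremity i of shape A
    [9.0, 0.5, 0.1],          # corresponds to extremity j of shape B (toy values)
    [0.2, 7.5, 0.3],
    [0.1, 0.4, 0.2],
])

rows, cols = linear_sum_assignment(-votes)   # maximize total vote weight
threshold = 1.0                              # drop weakly supported pairs -> partial map
matching = [(i, j) for i, j in zip(rows, cols) if votes[i, j] >= threshold]
print(matching)   # e.g. [(0, 0), (1, 1)]; extremity 2 is left unmatched
```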

3.
A high-level data fusion system that uses Bayesian statistics involving weights-of-evidence modelling is described to combine disparate information from airborne digital data such as digital surface model (DSM), colour, thermal infrared (TIR) and hyperspectral images at different time periods. To determine the efficacy of the system, an analysis of change detection was performed. The data fusion system is capable of detecting changes in man-made features automatically in a densely populated area where there is little prior information. Multiclass segmented images were obtained from the data captured by four airborne remote sensing sensors. The system performs data fusion modelling by using binary images of each theme class, and a total of 40 binary patterns were obtained. Through Bayesian methods involving weights-of-evidence modelling, all the binary images were analysed and finally four binary patterns (indicator images) were identified automatically as significant for the change-detection application. A weighted index overlay model available in the system combines these four patterns. Data fusion by weights-of-evidence modelling is found to be straightforward and unequivocal for predicting newly transformed locations. The results of the Bayesian method are accurate, as the weights are based on statistical analysis. Changes in features such as the colour of roofs, parking areas, open land areas, newly built structures, and the presence or absence of vehicles are extracted automatically by using the high-level data fusion approach. The final predictor image shows the probability of change-detected areas in a densely populated city in Japan.
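The weights-of-evidence calculation itself is standard and easy to sketch: for one binary evidence pattern and a binary "change" map, the positive and negative weights and their contrast indicate how predictive the pattern is. The arrays below are random placeholders, not the airborne data used in the study.

```python
# A minimal sketch of weights-of-evidence (W+, W-, contrast) for one binary
# evidence pattern against a binary change map.
import numpy as np

rng = np.random.default_rng(0)
evidence = rng.integers(0, 2, size=(100, 100)).astype(bool)   # one binary theme pattern
change   = rng.integers(0, 2, size=(100, 100)).astype(bool)   # change mask

def weights_of_evidence(evidence, change, eps=1e-9):
    p_b_d  = (evidence &  change).sum() / max(change.sum(), 1)      # P(B | D)
    p_b_nd = (evidence & ~change).sum() / max((~change).sum(), 1)   # P(B | ~D)
    w_plus  = np.log((p_b_d + eps) / (p_b_nd + eps))
    w_minus = np.log((1 - p_b_d + eps) / (1 - p_b_nd + eps))
    contrast = w_plus - w_minus     # large contrast -> pattern is a useful predictor
    return w_plus, w_minus, contrast

print(weights_of_evidence(evidence, change))
```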

4.
Planar Shape Detection and Regularization in Tandem
We present a method for planar shape detection and regularization from raw point sets. The geometric modelling and processing of man-made environments from measurement data often relies upon robust detection of planar primitive shapes. In addition, the detection and reinforcement of regularities between planar parts is a means to increase resilience to missing or defect-laden data as well as to reduce the complexity of models and algorithms down the modelling pipeline. The main novelty behind our method is to perform detection and regularization in tandem. We first sample a sparse set of seeds uniformly on the input point set, and then perform in parallel shape detection through region growing, interleaved with regularization through detection and reinforcement of regular relationships (coplanar, parallel and orthogonal). In addition to addressing the end goal of regularization, such reinforcement also improves data fitting and provides guidance for clustering small parts into larger planar parts. We evaluate our approach against a wide range of inputs and under four criteria: geometric fidelity, coverage, regularity and running times. Our approach compares well with available implementations such as the efficient random sample consensus-based approach proposed by Schnabel and co-authors in 2007.
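A single region-growing step can be sketched as follows: fit a plane to a seed neighbourhood via PCA and collect points whose distance to that plane stays within a tolerance. The thresholds and synthetic point set are illustrative, and the paper's interleaved regularization (coplanarity/parallelism/orthogonality reinforcement) is not shown.

```python
# A minimal sketch of plane fitting and one region-growing step for planar
# primitive detection from a raw point set.
import numpy as np
from scipy.spatial import cKDTree

def fit_plane(points):
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]                      # direction of least variance
    return centroid, normal

def grow_region(points, seed_idx, k=12, dist_tol=0.02):
    tree = cKDTree(points)
    _, seed_nbrs = tree.query(points[seed_idx], k=k)
    centroid, normal = fit_plane(points[seed_nbrs])
    in_plane = np.abs((points - centroid) @ normal) < dist_tol
    return np.flatnonzero(in_plane)      # indices of the detected planar region

# Noisy plane plus outliers as a toy input.
rng = np.random.default_rng(1)
plane = np.c_[rng.uniform(size=(500, 2)), 0.005 * rng.standard_normal(500)]
noise = rng.uniform(-1, 1, size=(50, 3))
pts = np.vstack([plane, noise])
region = grow_region(pts, seed_idx=0)
print(len(region), "points assigned to the seed plane")
```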

5.
In order to explore the most current information and react faster to changing business conditions, organizations consider real-time data warehousing a powerful technique for achieving operational business intelligence (BI). We propose in this paper a novel real-time data warehouse (RTDW) framework based on the virtualization concept. Our approach introduces a conceptual modelling technique, known as ring modelling, for real-time data management and multidimensional analysis. This technique produces a flexible semi-structured data model that accommodates unknown business process data and relationships as they evolve, handles schema changes and aggregate management efficiently, and scales well with increasing data volumes. With the help of a telecommunication business example, we evaluate our proposed Ring Model in an extensive experimental study, comparing it with existing structured multidimensional conceptual models (MCMs), i.e. relational OLAP and multidimensional OLAP, and with a semi-structured MCM, i.e. XML cubes, in terms of scalability, data storage estimates, data update loading time and query response times. Our performance results show that encouraging speedups are achieved.

6.
Modelling trees according to desired shapes is important for many applications. Despite the numerous methods that have been proposed for tree modelling, it remains a non-trivial and challenging task. In this paper, we present a new variational computing approach for generating realistic trees with specific shapes. Instead of directly modelling trees from symbolic rules, we formulate tree modelling as an optimization process in which a variational cost function is iteratively minimized. This cost function measures the difference between the guidance shape and the target tree crown. In addition, to faithfully capture the branch structure of trees, several botanical factors, including minimum total branch volume and spatial branch patterns, are considered in the optimization to guide the tree modelling process. We demonstrate that our approach can generate trees with different shapes, with guidance shapes obtained from interactive design as well as from complex polygonal meshes.
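The variational idea can be caricatured with a tiny greedy loop: candidate branch segments are accepted only if they reduce a cost that mixes crown-to-guidance mismatch with total branch length (a stand-in for volume). The guidance sphere, proposal scheme and weights below are invented placeholders, much simpler than the paper's botanical model.

```python
# A minimal sketch of tree growth as iterative cost minimization against a
# guidance shape.  Illustrative only; not the paper's formulation.
import numpy as np

rng = np.random.default_rng(2)
guidance = rng.standard_normal((400, 3))
guidance /= np.linalg.norm(guidance, axis=1, keepdims=True)   # unit sphere as guidance crown

def cost(tips, segment_lengths, volume_weight=0.05):
    # Crown term: how far each guidance sample is from its nearest branch tip.
    d = np.linalg.norm(guidance[:, None, :] - tips[None, :, :], axis=2).min(axis=1)
    return d.mean() + volume_weight * segment_lengths.sum()

tips = np.zeros((1, 3))                 # start from the trunk tip at the origin
lengths = np.array([0.0])
for _ in range(300):                    # greedy variational loop
    parent = rng.integers(len(tips))
    step = 0.25 * rng.standard_normal(3)
    candidate_tips = np.vstack([tips, tips[parent] + step])
    candidate_lengths = np.append(lengths, np.linalg.norm(step))
    if cost(candidate_tips, candidate_lengths) < cost(tips, lengths):
        tips, lengths = candidate_tips, candidate_lengths   # accept the branch

print(len(tips), "branch tips, final cost:", round(cost(tips, lengths), 3))
```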

7.
We review methods designed to compute correspondences between geometric shapes represented by triangle meshes, contours or point sets. This survey is motivated in part by recent developments in space-time registration, where one seeks a correspondence between non-rigid and time-varying surfaces, and semantic shape analysis, which underlines a recent trend to incorporate shape understanding into the analysis pipeline. Establishing a meaningful correspondence between shapes is often difficult because it generally requires an understanding of the structure of the shapes at both the local and global levels, and sometimes the functionality of the shape parts as well. Despite its inherent complexity, shape correspondence is a recurrent problem and an essential component of numerous geometry processing applications. In this survey, we discuss the different forms of the correspondence problem and review the main solution methods, aided by several classification criteria arising from the problem definition. The main categories of classification are defined in terms of the input and output representation, objective function and solution approach. We conclude the survey by discussing open problems and future perspectives.

8.
Today's product designer is being asked to develop high-quality, innovative products at an ever increasing pace. To meet this need, an intensive search is underway for advanced design methodologies that facilitate the acquisition of design knowledge and creative ideas for later reuse. Additionally, designers are embracing a wide range of 3D digital design applications, such as 3D digitization, 3D CAD and CAID, reverse engineering (RE), CAE analysis and rapid prototyping (RP). In this paper, we propose a reverse engineering innovative design methodology called Reverse Innovative Design (RID). The RID methodology facilitates design and knowledge reuse by leveraging 3D digital design applications. The core of our RID methodology is the definition and construction of feature-based parametric solid models from scanned data. The solid model is constructed with feature data to allow for design modification and iteration. Such a construction is well suited for downstream analysis and rapid prototyping. In this paper, we review the commercial availability and technological developments of some relevant 3D digital design applications. We then introduce three RE modelling strategies: an autosurfacing strategy for organic shapes; a solid modelling strategy with feature recognition and surface fitting for analytical models; and a curve-based modelling strategy for accurate reverse modelling. Freeform shapes are appearing with increasing frequency in product development. Since their "natural" parameters are hard to define and extract, we propose constructing a feature skeleton based upon industrial or regional standards or through user interaction. Global and local product definition parameters are then linked to the feature skeleton. Design modification is performed by solving a constrained optimization problem. A RID platform has been developed, and the main RE strategies and core algorithms have been integrated into SolidWorks as an add-in product called ScanTo3D. We use this system to demonstrate our RID methodology on a collection of innovative consumer product design examples.
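The "design modification as constrained optimization" step can be sketched with a generic solver: feature-skeleton parameters stay close to the values extracted from scan data while designer-imposed relationships are enforced. The parameter names, target values and constraints below are hypothetical, not those of the RID platform.

```python
# A minimal sketch of adjusting reverse-engineered feature parameters under
# design constraints.  Hypothetical parameters for illustration only.
import numpy as np
from scipy.optimize import minimize

scanned = np.array([80.0, 39.5, 12.2])      # width, depth, fillet radius from scan (mm)

def objective(x):
    return np.sum((x - scanned) ** 2)        # stay close to the reverse-engineered values

constraints = [
    {"type": "eq",   "fun": lambda x: x[0] - 2.0 * x[1]},   # width must equal twice the depth
    {"type": "ineq", "fun": lambda x: x[2] - 10.0},          # fillet radius >= 10 mm
]

result = minimize(objective, x0=scanned, constraints=constraints)
print(result.x)    # regularized parameters fed back into the parametric solid model
```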

9.
Polygon meshes with 3-valent vertices often occur as the frames of free-form surfaces in architecture, in which rigid beams are connected by rigid joints. For modelling such meshes, it is desirable to measure the deformation of the joints' shapes. We show that it is natural to represent joint shapes as points in hyperbolic 3-space. This endows the space of joint shapes with a geometric structure that facilitates computation. We use this structure to optimize meshes towards different constraints, and we believe that it will be useful for other applications as well.
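Computing with points in hyperbolic 3-space is concrete in the hyperboloid model: a Lorentzian inner product gives distances and geodesics. The two example points below are arbitrary; how a joint shape is mapped to such a point is specific to the paper and not reproduced here.

```python
# A minimal sketch of hyperbolic 3-space in the hyperboloid model: Lorentzian
# inner product, distance, and geodesic blending between two points.
import numpy as np

def lorentz_dot(x, y):
    return -x[0] * y[0] + np.dot(x[1:], y[1:])

def lift(v):                       # embed v in R^3 onto the hyperboloid x0^2 - |v|^2 = 1
    return np.concatenate([[np.sqrt(1.0 + np.dot(v, v))], v])

def hyperbolic_distance(x, y):
    return np.arccosh(np.clip(-lorentz_dot(x, y), 1.0, None))

def geodesic(x, y, t):             # point at parameter t on the geodesic from x to y
    d = hyperbolic_distance(x, y)
    if d < 1e-12:
        return x
    return (np.sinh((1 - t) * d) * x + np.sinh(t * d) * y) / np.sinh(d)

a, b = lift(np.array([0.2, 0.0, 0.1])), lift(np.array([-0.4, 0.3, 0.0]))
print(hyperbolic_distance(a, b), geodesic(a, b, 0.5))
```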

10.
Mining large amounts of unstructured data to extract meaningful, accurate and actionable information is at the core of a variety of research disciplines, including computer science, mathematical and statistical modelling, as well as knowledge engineering. In particular, the ability to model complex scenarios based on unstructured datasets is an important step towards an integrated and accurate knowledge extraction approach. This would provide significant insight in any decision-making process driven by Big Data analysis activities. However, there are multiple challenges that need to be fully addressed in order to achieve this, especially when large and unstructured data sets are considered. In this article we propose and analyse a novel method to extract and build fragments of Bayesian networks (BNs) from unstructured large data sources. The results of our analysis show the potential of our approach and highlight its accuracy and efficiency. More specifically, when compared with existing approaches, our method addresses specific challenges posed by the automated extraction of BNs with extensive applications to unstructured and highly dynamic data sources. The aim of this work is to advance the current state-of-the-art approaches to the automated extraction of BNs from unstructured datasets, which provide a versatile and powerful modelling framework to facilitate knowledge discovery in complex decision scenarios.

11.
The term Big Data denotes huge-volume, complex, rapidly growing datasets with numerous, autonomous and independent sources. In these new circumstances, Big Data brings many attractive opportunities; however, good opportunities are always accompanied by challenges, such as modelling, new paradigms and novel architectures that require original approaches to address data complexities. The purpose of this special issue on Modeling and Management of Big Data is to discuss research and experience in modelling and to develop as well as deploy systems and techniques that deal with Big Data. A summary of the selected papers is presented, followed by a conceptual modelling proposal for Big Data. Big Data creates new requirements based on complexities in data capture, data storage, data analysis and data visualization. These concerns are discussed in detail in this study and proposals are recommended for specific areas of future research.

12.
This paper deals with the reconstruction of three-dimensional (3D) geometric shapes based on observed noisy 3D measurements and multiple coupled nonlinear shape constraints. Here a shape could be a complete object, a portion of an object, a part of a building, etc. The paper suggests a general incremental framework whereby constraints can be added and integrated in the model reconstruction process, resulting in an optimal trade-off between minimization of the shape fitting error and the constraint tolerances. After defining sets of main constraints for objects containing planar and quadric surfaces, the paper shows that our scheme is well behaved and the approach is valid through application to different real parts. This work is the first to give such a large framework for the integration of numerical geometric relationships in object modeling from range data. The technique is expected to have a great impact in reverse engineering applications and manufactured object modeling, where the majority of parts are designed with intended feature relationships.
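The fitting-error versus constraint-tolerance trade-off can be illustrated by fitting two planes to noisy point sets while a weighted penalty pushes their normals towards orthogonality. The synthetic data, parameterization and penalty weight are illustrative, not the paper's incremental framework.

```python
# A minimal sketch of constraint-aware fitting: two planes with a soft
# orthogonality penalty, solved as a single least-squares problem.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(3)
pts_a = np.c_[rng.uniform(size=(200, 2)), 0.01 * rng.standard_normal(200)]   # roughly z = 0
xb, zb = rng.uniform(size=200), rng.uniform(size=200)
yb = 0.05 * xb + 0.01 * rng.standard_normal(200)                             # roughly y = 0, slightly tilted
pts_b = np.c_[xb, yb, zb]

def unpack(p):
    n1, d1, n2, d2 = p[0:3], p[3], p[4:7], p[7]
    return n1 / np.linalg.norm(n1), d1, n2 / np.linalg.norm(n2), d2

def residuals(p, weight=30.0):
    n1, d1, n2, d2 = unpack(p)
    r1 = pts_a @ n1 + d1                      # point-to-plane distances, plane A
    r2 = pts_b @ n2 + d2                      # point-to-plane distances, plane B
    ortho = weight * np.dot(n1, n2)           # soft orthogonality constraint
    return np.concatenate([r1, r2, [ortho]])

p0 = np.array([0.0, 0.0, 1.0, 0.0, 0.0, 1.0, 0.0, 0.0])
fit = least_squares(residuals, p0)
n1, _, n2, _ = unpack(fit.x)
print("angle between normals (deg):", np.degrees(np.arccos(abs(np.dot(n1, n2)))))
```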

13.
We present a framework for incorporating prior information about high-probability shapes in the process of contour extraction and object recognition in images. Here one studies shapes as elements of an infinite-dimensional, non-linear quotient space, and statistics of shapes are defined and computed intrinsically using differential geometry of this shape space. Prior models on shapes are constructed using probability distributions on tangent bundles of shape spaces. Similar to past work on active contours, where curves are driven by vector fields based on image gradients and roughness penalties, we incorporate the prior shape knowledge in the form of vector fields on curves. Through experimental results, we demonstrate the use of prior shape models in the estimation of object boundaries, and their success in handling partial obscuration and missing data. Furthermore, we describe the use of this framework in shape-based object recognition or classification.
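The idea of combining a data-driven vector field with a prior-driven vector field on a curve can be sketched with a toy contour evolution: a discretized contour is pushed towards high image-gradient locations of a synthetic disc image and simultaneously pulled towards a "mean" shape. Everything here (the toy image, the circular prior, the weights) is illustrative; the intrinsic shape-space statistics of the paper are not reproduced.

```python
# A minimal sketch of curve evolution under a combined image force and prior
# force, loosely in the spirit of prior-augmented active contours.
import numpy as np

# Synthetic image: a bright disc of radius 30 centred in a 128x128 grid.
yy, xx = np.mgrid[0:128, 0:128]
image = ((xx - 64) ** 2 + (yy - 64) ** 2 < 30 ** 2).astype(float)
gy, gx = np.gradient(image)
edge = np.hypot(gx, gy)                       # edge-strength map
ey, ex = np.gradient(edge)                    # force field pointing towards edges

theta = np.linspace(0, 2 * np.pi, 100, endpoint=False)
prior = np.c_[64 + 28 * np.cos(theta), 64 + 28 * np.sin(theta)]   # "mean" circular shape
curve = np.c_[64 + 45 * np.cos(theta), 64 + 40 * np.sin(theta)]   # initial contour

lam, dt = 0.05, 2.0
for _ in range(200):
    ix = np.clip(curve[:, 0].astype(int), 0, 127)
    iy = np.clip(curve[:, 1].astype(int), 0, 127)
    data_force = np.c_[ex[iy, ix], ey[iy, ix]]        # sampled image force
    prior_force = prior - curve                        # vector field towards the prior shape
    curve = curve + dt * (data_force + lam * prior_force)

print("mean distance to disc boundary:",
      np.abs(np.hypot(curve[:, 0] - 64, curve[:, 1] - 64) - 30).mean())
```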

14.
This study presents a literature analysis, using a semiautomated text mining and topic modelling approach, of the body of knowledge encompassed in 17 years (2000–2016) of literature published in Wiley's Expert Systems journal, a key reference in Expert Systems (ESs) research, comprising a total of 488 research articles. The methodological approach included analysing countries from authors' affiliations, with results emphasizing the relevance of both U.S. and U.K. researchers, with Chinese, Turkish and Spanish researchers also holding significant relevance. As a result of the sparsity found in the keywords, one of our goals became to devise a taxonomy for future submissions under two core dimensions: ESs' methods and ESs' applications. Finally, through topic modelling, data-driven methods were unveiled as the most relevant, pairing with evaluation methods in their application to managerial sciences, arts and humanities. Findings also show that most of the application domains are well represented, including health, engineering, energy and social sciences.
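The core of such a semiautomated pipeline is easy to sketch: vectorize article abstracts, fit a topic model, and inspect the top terms per topic. The three toy "abstracts" below stand in for the 488 Expert Systems articles analysed in the study, and the specific model settings are not those of the paper.

```python
# A minimal sketch of a text mining + topic modelling pipeline over abstracts.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

abstracts = [
    "fuzzy expert system for medical diagnosis and health decision support",
    "neural network data driven model for energy load forecasting",
    "knowledge based engineering expert system for fault diagnosis",
]

vectorizer = CountVectorizer(stop_words="english")
X = vectorizer.fit_transform(abstracts)

lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
terms = vectorizer.get_feature_names_out()
for k, topic in enumerate(lda.components_):
    top = [terms[i] for i in topic.argsort()[-5:][::-1]]
    print(f"topic {k}:", ", ".join(top))
```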

15.
16.
Ergonomics, 2012, 55(10): 1714–1725
A statistical body shape model (SBSM) for children was developed for generating a child body shape with desired anthropometric parameters. A standardised template mesh was fit to whole-body laser scan data from 137 children aged 3–11 years. The mesh coordinates along with a set of surface landmarks and 27 manually measured anthropometric variables were analysed using principal component (PC) analysis. PC scores were associated with anthropometric predictors such as stature, body mass index (BMI) and ratio of erect sitting height to stature (SHS) using a regression model. When the original scan data were compared with the predictions of the SBSM using each subject's stature, BMI and SHS, the mean absolute error was 10.4 ± 5.8 mm, and 95th percentile error was 24.0 ± 18.5 mm. The model, publicly available online, will have utility for a wide range of applications.

Practitioner Summary: A statistical body shape model for children helps to account for inter-individual variability in body shapes as well as anthropometric dimensions. This parametric modelling approach is useful for reliable prediction of the body shape of a specific child from a few given predictors, such as stature, body mass index and age.
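The overall SBSM workflow (PCA on registered mesh coordinates, then regression from anthropometric predictors to principal-component scores) can be sketched as below. The random "meshes" and predictor ranges are placeholders for the 137 registered child scans, not the published model.

```python
# A minimal sketch of a statistical body shape model: PCA + regression from
# stature, BMI and sitting-height ratio to PC scores, then shape prediction.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(4)
n_subjects, n_vertices = 137, 2000
meshes = rng.standard_normal((n_subjects, n_vertices * 3))            # flattened template meshes
predictors = np.c_[rng.uniform(95, 150, n_subjects),                   # stature (cm)
                   rng.uniform(13, 22, n_subjects),                    # BMI
                   rng.uniform(0.50, 0.57, n_subjects)]                # sitting height / stature

pca = PCA(n_components=20).fit(meshes)
scores = pca.transform(meshes)                                          # PC scores per subject
reg = LinearRegression().fit(predictors, scores)                        # predictors -> PC scores

new_child = np.array([[120.0, 16.5, 0.54]])                             # stature, BMI, SHS
predicted_mesh = pca.inverse_transform(reg.predict(new_child))          # flattened vertex coordinates
print(predicted_mesh.reshape(-1, 3).shape)                              # (2000, 3)
```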

17.
18.
Data analysis often involves finding models that can explain patterns in data and reduce possibly large data sets to more compact model-based representations. In Statistics, many methods are available to compute model information. Among others, regression models are widely used to explain data. However, regression analysis typically searches for the best model based on the global distribution of data. On the other hand, a data set may be partitioned into subsets, each requiring individual models. While automatic data subsetting methods exist, these often require parameters or domain knowledge to work with. We propose a system for visual-interactive regression analysis for scatter plot data, supporting both global and local regression modeling. We introduce a novel regression lens concept, allowing a user to interactively select a portion of data, on which regression analysis is run in interactive time. The lens gives encompassing visual feedback on the quality of candidate models as it is interactively navigated across the input data. While our regression lens can be used for fully interactive modeling, we also provide user guidance suggesting appropriate models and data subsets, by means of regression quality scores. We show, by means of use cases, that our regression lens is an effective tool for user-driven regression modeling and supports model understanding.
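The contrast between a global model and a lens-local model can be sketched with a fixed circular lens over synthetic data that has two local trends; in the actual system the lens would follow the user's pointer and the quality score would drive the guidance. Names and thresholds below are illustrative.

```python
# A minimal sketch of global vs. lens-local regression with a simple quality score.
import numpy as np

rng = np.random.default_rng(5)
x = rng.uniform(0, 10, 500)
y = np.where(x < 5, 2 * x, -1.5 * x + 18) + rng.normal(0, 0.5, 500)   # two local trends

def fit_and_score(xs, ys, degree=1):
    coeffs = np.polyfit(xs, ys, degree)
    resid = ys - np.polyval(coeffs, xs)
    r2 = 1 - resid.var() / ys.var()          # simple regression quality score
    return coeffs, r2

lens_centre, lens_radius = 7.5, 2.0
inside = np.abs(x - lens_centre) < lens_radius          # points under the lens

print("global:", fit_and_score(x, y))
print("lens:  ", fit_and_score(x[inside], y[inside]))   # local model fits its subset far better
```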

19.
Automatic synthesis of high quality 3D shapes is an ongoing and challenging area of research. While several data-driven methods have been proposed that make use of neural networks to generate 3D shapes, none of them reach the level of quality that deep learning synthesis approaches for images provide. In this work we present a method for a convolutional point cloud decoder/generator that makes use of recent advances in the domain of image synthesis. Namely, we use Adaptive Instance Normalization and offer an intuition on why it can improve training. Furthermore, we propose extensions to the minimization of the commonly used Chamfer distance for auto-encoding point clouds. In addition, we show that careful sampling is important both for the input geometry and in our point cloud generation process to improve results. The results are evaluated in an auto-encoding setup to offer both qualitative and quantitative analysis. The proposed decoder is validated by an extensive ablation study and is able to outperform current state-of-the-art results in a number of experiments. We show the applicability of our method in the fields of point cloud upsampling, single view reconstruction, and shape synthesis.
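The baseline loss this line of work builds on, the symmetric Chamfer distance between two point clouds, is simple to write down; a training setup would use a differentiable version of the same expression in a deep learning framework. The two random clouds below are placeholders for decoder output and sampled ground truth, and the paper's extensions to the loss are not shown.

```python
# A minimal sketch of the symmetric Chamfer distance between two point clouds.
import numpy as np

def chamfer_distance(a, b):
    """Mean nearest-neighbour distance from a to b plus from b to a."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)   # pairwise distances
    return d.min(axis=1).mean() + d.min(axis=0).mean()

rng = np.random.default_rng(6)
predicted = rng.standard_normal((1024, 3))     # decoder output
target = rng.standard_normal((1024, 3))        # sampled ground-truth surface points
print(chamfer_distance(predicted, target))
```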

20.