Similar documents
20 similar documents found.
1.
This paper advocates a novel method for modelling physically realistic flow from captured incompressible gas sequences via modal analysis in a frequency-constrained subspace. Our analytical tool is uniquely founded upon empirical mode decomposition (EMD) and modal reduction for fluids, which are seamlessly integrated into a powerful, style-controllable flow modelling approach. We first extend EMD, which can process 1D time series but has previously proved inadequate for 3D graphics, to fit gas flows in 3D. Next, frequency components from EMD are adopted as candidate vectors for the bases of modal reduction. The prerequisite parameters of the Navier–Stokes equations are then optimized to inversely model the physically realistic flow in the frequency-constrained subspace. The estimated parameters can be utilized for re-simulation, or altered for fluid editing. Our novel inverse-modelling technique produces real-time gas sequences after precomputation, and is convenient to couple with other methods for visual enhancement and/or special visual effects. We integrate our new modelling tool with a state-of-the-art fluid capturing approach, forming a complete pipeline from real-world fluid to flow re-simulation and editing for various graphics applications.
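
As a concrete reference point, here is a minimal sketch of one round of EMD sifting on a 1D signal, the classical building block the paper extends to 3D gas flows; the function name and iteration count are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of EMD "sifting" on a 1D signal (illustrative, not the paper's 3D extension).
import numpy as np
from scipy.signal import argrelextrema
from scipy.interpolate import CubicSpline

def sift_one_imf(x, n_iters=10):
    """Extract a single intrinsic mode function (IMF) from signal x by
    repeatedly subtracting the mean of the upper and lower envelopes."""
    t = np.arange(len(x))
    h = np.asarray(x, dtype=float).copy()
    for _ in range(n_iters):
        maxima = argrelextrema(h, np.greater)[0]
        minima = argrelextrema(h, np.less)[0]
        if len(maxima) < 4 or len(minima) < 4:
            break  # not enough extrema to build spline envelopes
        upper = CubicSpline(maxima, h[maxima])(t)
        lower = CubicSpline(minima, h[minima])(t)
        h = h - 0.5 * (upper + lower)  # remove the local envelope mean
    return h

# Usage: pull the first IMF out of a noisy sine.
x = np.sin(np.linspace(0, 10 * np.pi, 500)) + 0.3 * np.random.randn(500)
imf1 = sift_one_imf(x)
residual = x - imf1
```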

2.
This paper presents a novel progressive modelling algorithm that generates progressive meshes for 3D models. We propose a forest clustering simplification method to generate a progressive mesh of a model with efficient and smooth transitions between meshes at different resolutions. Our approach can also integrate and balance appearance attributes to preserve the features of a model during simplification. We have applied our progressive modelling technique to several different kinds of input models, and the results show that our approach not only generates efficient and smooth progressive meshes of a given model, but also preserves its features. The proposed method is well suited to progressive transmission and real-time rendering of 3D models in networked virtual environments. Copyright © 2002 John Wiley & Sons, Ltd.
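
For illustration only, the sketch below shows the collapse-record idea behind progressive meshes; the distance-based merge metric is a stand-in assumption, not the paper's forest-clustering criterion, and face bookkeeping is omitted.

```python
# Hedged sketch: a coarse base set of vertices plus recorded collapses that can
# be replayed in reverse to refine the model progressively.
import heapq
import numpy as np

def simplify(vertices, edges, target):
    """vertices: {id: xyz}; edges: iterable of (a, b). Returns surviving
    vertices and the list of collapse records (kept, removed, removed_xyz)."""
    verts = {i: np.asarray(p, float) for i, p in vertices.items()}
    heap = [(np.linalg.norm(verts[a] - verts[b]), a, b) for a, b in edges]
    heapq.heapify(heap)
    records = []
    while len(verts) > target and heap:
        _, keep, drop = heapq.heappop(heap)
        if keep not in verts or drop not in verts:
            continue                      # stale entry, endpoint already removed
        records.append((keep, drop, verts[drop].copy()))
        del verts[drop]                   # coarsen: remove one endpoint
    return verts, records

def refine(verts, records):
    """Replay collapses in reverse (vertex splits) to recover finer levels."""
    verts = dict(verts)
    for keep, drop, xyz in reversed(records):
        verts[drop] = xyz
    return verts
```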

3.
Reducing power consumption has become an essential requirement for Cloud resource providers, not only to decrease operating costs but also to improve system reliability. As Cloud computing emerges as the platform for the Anything as a Service (XaaS) paradigm, modern real-time services are also becoming available through Cloud computing. In this work, we investigate power-aware provisioning of virtual machines for real-time services. Our approach is (i) to model a real-time service as a real-time virtual machine request, and (ii) to provision virtual machines in Cloud data centers using dynamic voltage frequency scaling (DVFS) schemes. We propose several schemes that reduce the power consumed by hard real-time services and enable power-aware, profitable provisioning of soft real-time services. Copyright © 2011 John Wiley & Sons, Ltd.
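
To make the provisioning idea concrete, a hedged sketch follows: each host runs at the lowest DVFS frequency that keeps the admitted real-time load schedulable, and the VM is placed on the host with the smallest power increase. The cubic power model and all constants are illustrative assumptions, not the authors' calibrated values.

```python
# Illustrative DVFS-based, power-aware VM placement (assumed power model).
def power(freq_ghz, idle_watts=70.0, k=25.0):
    """Dynamic CPU power grows roughly with f^3 under DVFS (P ~ C*V^2*f, V ~ f)."""
    return idle_watts + k * freq_ghz ** 3

def min_feasible_freq(host_util, vm_util, freqs, f_max):
    """Lowest available frequency that keeps total utilization schedulable.
    Utilizations are expressed relative to the host's maximum frequency."""
    for f in sorted(freqs):
        if (host_util + vm_util) <= f / f_max:
            return f
    return None  # VM cannot be admitted on this host

def place_vm(hosts, vm_util, freqs=(1.0, 1.5, 2.0, 2.6), f_max=2.6):
    """Choose the host whose power increase is smallest after admitting the VM."""
    best = None
    for h in hosts:                      # h: {"name": ..., "util": ..., "freq": ...}
        f_new = min_feasible_freq(h["util"], vm_util, freqs, f_max)
        if f_new is None:
            continue
        delta = power(f_new) - power(h["freq"])
        if best is None or delta < best[0]:
            best = (delta, h, f_new)
    return best  # (power increase, chosen host, new operating frequency)

hosts = [{"name": "h1", "util": 0.30, "freq": 1.5},
         {"name": "h2", "util": 0.55, "freq": 2.0}]
print(place_vm(hosts, vm_util=0.25))
```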

4.
Given the strong increase in regulatory requirements for business processes, the management of business process compliance has become an increasingly important field in IS research. Several methods have been developed to support compliance checking of conceptual models. However, their focus on specific modeling languages and on mostly linear (i.e., predecessor-successor related) compliance rules may hinder widespread adoption and application in practice. Furthermore, hardly any of them has been evaluated in a real-world setting. We address this issue by applying a generic pattern matching approach for conceptual models to business process compliance checking in the financial sector. It consists of a model query language, a search algorithm and a corresponding modelling tool prototype. The approach is applicable (1) to all graph-based conceptual modeling languages and (2) to different kinds of compliance rules. Furthermore, based on an applicability check, we (3) evaluate the approach in a financial industry project setting with respect to its relevance for decision support in audit and compliance management tasks.
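
As a rough illustration of pattern-based compliance checking, the following sketch encodes a compliance rule as a small pattern graph and searches for it with networkx subgraph matching; this is a stand-in for the paper's own model query language and search algorithm, and the rule itself is invented.

```python
# Toy compliance check: a rule is a pattern graph; a violation is reported when
# the required pattern does not occur in the process model graph.
import networkx as nx
from networkx.algorithms import isomorphism

# Process model: activities as nodes with a "type" attribute, control flow as edges.
process = nx.DiGraph()
process.add_node("receive_order", type="activity")
process.add_node("payout", type="activity")
process.add_edge("receive_order", "payout")

# Rule pattern (simplified): a payout must be directly preceded by an approval.
pattern = nx.DiGraph()
pattern.add_node("approve", type="approval")
pattern.add_node("pay", type="activity")
pattern.add_edge("approve", "pay")

matcher = isomorphism.DiGraphMatcher(
    process, pattern,
    node_match=isomorphism.categorical_node_match("type", None))
compliant = matcher.subgraph_is_isomorphic()
print("compliant" if compliant else "violation: payout without approval")
```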

5.
Time-series analysis is a powerful technique for discovering patterns and trends in temporal data. However, the lack of a conceptual model for this data-mining technique forces analysts to deal with unstructured data. These data are represented at a low level of abstraction and are expensive to manage. Most analysts face two main problems: (i) cleansing the huge amount of potentially analysable data and (ii) correctly defining the data-mining algorithms to be employed. Because analysts' interests are also hidden in this scenario, it is difficult not only to prepare the data, but also to discover which data are the most promising. Since their appearance, data warehouses have therefore proved to be a powerful repository of historical data for data-mining purposes. Moreover, their foundational modelling paradigm, multidimensional modelling, is very close to the problem domain. In this article, we propose a Unified Modeling Language (UML) extension, through UML profiles, for data mining. Specifically, the UML profile presented allows us to specify time-series analysis on top of the multidimensional models of data warehouses. Our extension provides analysts with an intuitive notation for time-series analysis which is independent of any specific data-mining tool or algorithm. In order to show its feasibility and ease of use, we apply it to the analysis of fish captures in Alicante. We believe that a coherent conceptual modelling framework for data mining assures a better and easier knowledge-discovery process on top of data warehouses.

6.
Particle-based simulation techniques, like the discrete element method or molecular dynamics, are widely used in many research fields. In real-time explorative visualization it is common to render the resulting data using opaque spherical glyphs with local lighting only. Due to massive overlaps, however, inner structures of the data are often occluded, rendering visual analysis impossible. Furthermore, local lighting alone is not sufficient, as several important features such as complex shapes, holes, rifts or filaments cannot be perceived well. To address both problems we present a new technique that jointly supports transparency and ambient occlusion in a consistent illumination model. Our approach is based on the emission-absorption model of volume rendering. We provide analytic solutions to the volume rendering integral for several density distributions within a spherical glyph. Compared to constant transparency, our approach preserves the three-dimensional impression of the glyphs much better. We approximate ambient illumination with a fast hierarchical voxel cone-tracing approach, which builds on a new real-time voxelization of the particle data. Our implementation achieves interactive frame rates for millions of static or dynamic particles without any preprocessing. We illustrate the merits of our method on real-world data sets, gaining several new insights.
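
The simplest instance of the analytic solution mentioned above is pure absorption through a homogeneous spherical glyph; the sketch below evaluates that case with Beer-Lambert transmittance. The density profiles and compositing actually used in the paper may differ.

```python
# For a ray passing the sphere centre at distance b, the chord length is
# 2*sqrt(r^2 - b^2), so transmittance has a closed form.
import numpy as np

def sphere_transmittance(b, radius, sigma):
    """Fraction of light transmitted along a ray with impact parameter b
    through a homogeneous sphere with extinction coefficient sigma."""
    b = np.asarray(b, dtype=float)
    chord = 2.0 * np.sqrt(np.clip(radius ** 2 - b ** 2, 0.0, None))
    return np.exp(-sigma * chord)

def composite(glyph_color, background, b, radius, sigma):
    """Alpha-composite one glyph over the background using the analytic alpha."""
    t = sphere_transmittance(b, radius, sigma)
    alpha = 1.0 - t
    return alpha * np.asarray(glyph_color) + t * np.asarray(background)

# A grazing ray (b close to r) stays nearly transparent; one through the
# centre (b = 0) is the most opaque.
print(sphere_transmittance([0.0, 0.5, 0.99], radius=1.0, sigma=1.5))
```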

7.
This paper introduces a model-driven approach to the design of collaborative Web-based applications, i.e. applications in which several users play different roles, in a collaborative way, to pursue a specific goal. The paper illustrates a conference management application (CMA), whose main requirements include: (i) the management of user profiles and access rights based on the role played by users during the conference life cycle; (ii) the delivery of information and services to individual users; and (iii) the management of the sequence of activities that lead to the achievement of a common goal. The presented approach is based on WebML, a conceptual modelling language for the Web. The paper also highlights some general properties, drawn from the practical experience of CMA development, that a Web modelling language should offer in order to fully support the development of collaborative applications. Copyright © 2003 John Wiley & Sons, Ltd.

8.
Using labelled ordered trees as the data model for semi-structured data, this paper studies the problem of mining maximal frequent tree patterns from semi-structured data. Existing algorithms usually mine all frequent patterns, many of which are sub-patterns of other patterns. To address this problem, we design and implement a maximal-pattern mining algorithm. The algorithm enumerates all candidate patterns without duplication via rightmost extension, uses a frequent-pattern extension forest for efficient pruning, extension and mining of frequent leaf patterns, and obtains the maximal frequent tree patterns by computing the containment relations among the frequent leaf patterns. Experimental results show that the algorithm performs well.
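
A hedged sketch of rightmost-extension enumeration, the duplicate-free candidate generation named in the abstract, is shown below; support counting and the extension-forest pruning are omitted, and the encoding is just one common convention.

```python
# A pattern tree is encoded as its preorder (depth, label) sequence; a new node
# may only be attached to a node on the rightmost path, which guarantees each
# ordered tree is generated exactly once.
def rightmost_extensions(pattern, labels):
    """Yield every one-node extension of `pattern` (a list of (depth, label))."""
    if not pattern:
        for lab in labels:
            yield [(0, lab)]                     # single-root patterns
        return
    last_depth = pattern[-1][0]
    for depth in range(1, last_depth + 2):       # attach to any rightmost-path node
        for lab in labels:
            yield pattern + [(depth, lab)]

# Enumerate all 1- and 2-node labelled ordered trees over labels {A, B}.
level1 = list(rightmost_extensions([], ["A", "B"]))
level2 = [ext for p in level1 for ext in rightmost_extensions(p, ["A", "B"])]
print(len(level1), len(level2))   # 2 single-node trees, 4 two-node trees
```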

9.
In this paper, we present an inexpensive approach to create highly detailed reconstructions of the landscape surrounding a road. Our method is based on a space-efficient semi-procedural representation of the terrain and vegetation supporting high-quality real-time rendering not only for aerial views but also at road level. We can integrate photographs along selected road stretches. We merge the point clouds extracted from these photographs with a low-resolution digital terrain model through a novel algorithm which is robust against noise and missing data. We pre-compute plausible locations for trees through an algorithm which takes into account perceptual cues. At runtime we render the reconstructed terrain along with plants generated procedurally according to pre-computed parameters. Our rendering algorithm ensures visual consistency with aerial imagery and thus it can be integrated seamlessly with current virtual globes.

10.
When monitoring safety levels in deep pit foundations using sensors, anomalies (e.g., highly correlated variables) and noise (e.g., high dimensionality) exist in the extracted time-series data, impacting the ability to assess risks. Our research aims to address the following question: How can we detect anomalies and de-noise monitoring data from sensors in real time to improve its quality and use it to assess geotechnical safety risks? In addressing this research question, we develop a hybrid smart data approach that integrates Extended Isolation Forest and Variational Mode Decomposition models to detect anomalies and de-noise data effectively. We validate our smart data approach using real-life sensor data obtained during the construction of a deep pit foundation. Our smart data approach can detect anomalies with a root mean square error and signal-to-noise ratio of 0.0389 and 24.09, respectively. As a result, our smart data approach can effectively pre-process data, enabling improved decision-making and management of safety risks.
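
The sketch below illustrates the anomaly-detection half of such a pipeline; scikit-learn's standard IsolationForest and a moving average stand in for the paper's Extended Isolation Forest and Variational Mode Decomposition, and all data, window sizes and thresholds are made up.

```python
# Stand-in pipeline: flag outliers, smooth, and report RMSE / SNR style metrics.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
readings = np.cumsum(rng.normal(0, 0.02, 500))      # synthetic settlement signal
readings[120] += 1.5                                 # injected anomaly

# 1) Flag anomalous readings (label -1) with an isolation forest.
X = readings.reshape(-1, 1)
flags = IsolationForest(contamination=0.01, random_state=0).fit_predict(X)
clean = readings[flags == 1]

# 2) Stand-in de-noising: moving average instead of the paper's VMD modes.
window = 9
smooth = np.convolve(clean, np.ones(window) / window, mode="valid")

# 3) Quality metrics comparable in spirit to those reported (RMSE, SNR in dB).
aligned = clean[window - 1:]
rmse = np.sqrt(np.mean((aligned - smooth) ** 2))
snr = 10 * np.log10(np.sum(smooth ** 2) / np.sum((aligned - smooth) ** 2))
kept = (flags == 1).sum()
print(f"kept {kept} of {len(readings)} readings, RMSE={rmse:.4f}, SNR={snr:.2f} dB")
```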

11.
Successful data warehouse (DW) design needs to be based upon a requirement analysis phase in order to adequately represent the information needs of DW users. Moreover, since the DW integrates the information provided by data sources, it is also crucial to take these sources into account throughout the development process to obtain a consistent reconciliation of data sources and information needs. In this paper, we start by summarizing our approach to specifying user requirements for data warehouses and obtaining a conceptual multidimensional model that captures these requirements. Then, we make use of the multidimensional normal forms to define a set of Query/View/Transformation (QVT) relations to ensure that the conceptual multidimensional model obtained from user requirements agrees with the available data sources that will populate the DW. Thus, we propose a hybrid approach to developing DWs: we first obtain the conceptual multidimensional model of the DW from user requirements and then verify and enforce its correctness against the data sources by using a set of QVT relations based on the multidimensional normal forms. Finally, we provide some snapshots of the CASE tool we have used to implement our QVT relations.

12.
We present some basic concepts of a modelling environment for data integration in business analytics. The main emphasis is on defining a process model for the different activities occurring in connection with data integration, which later allows an assessment of the quality of the data. The model is based on a combination of knowledge and techniques from statistical metadata management and from workflow processes. The modelling concepts are presented in a problem-oriented formulation. The approach is embedded into an open model framework which aims to provide a modelling platform for all kinds of models useful in business applications.

13.
We propose an efficient approach for interactive visualization of massive models with CPU ray tracing. A voxel-based hierarchical level-of-detail (LOD) framework is employed to minimize rendering time and required system memory. In a pre-processing phase, a compressed out-of-core data structure is constructed, which contains the original primitives of the model and the LOD voxels, organized into a kd-tree. During rendering, data is loaded asynchronously to ensure a smooth inspection of the model regardless of the available I/O bandwidth. With our technique, we are able to explore data sets consisting of hundreds of millions of triangles in real-time on a desktop PC with a quad-core CPU.
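
For orientation, a minimal median-split kd-tree over primitive centroids is sketched below; the paper's structure additionally stores LOD voxels and compressed out-of-core nodes, which this toy version omits.

```python
# Toy in-memory kd-tree build over point data (assumed layout, not the paper's).
import numpy as np

def build_kdtree(points, indices=None, depth=0, leaf_size=4):
    """Return a nested dict kd-tree over an (N, 3) array of points."""
    if indices is None:
        indices = np.arange(len(points))
    if len(indices) <= leaf_size:
        return {"leaf": indices}
    axis = depth % 3
    order = indices[np.argsort(points[indices, axis])]
    mid = len(order) // 2
    return {
        "axis": axis,
        "split": points[order[mid], axis],
        "left": build_kdtree(points, order[:mid], depth + 1, leaf_size),
        "right": build_kdtree(points, order[mid:], depth + 1, leaf_size),
    }

tree = build_kdtree(np.random.default_rng(0).random((1000, 3)))
```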

14.
The ability to accurately achieve performance capture of athlete motion during competitive play in near real-time promises to revolutionize not only broadcast sports graphics visualization and commentary, but also potentially performance analysis, sports medicine, fantasy sports and wagering. In this paper, we present a highly portable, non-intrusive approach for synthesizing human athlete motion in competitive game-play with lightweight instrumentation of both the athlete and the field of play. Our data-driven puppetry technique relies on a pre-captured database of short segments of motion capture data to construct a motion graph augmented with interpolated motions and speed variations. An athlete's performed motion is synthesized by finding a related action sequence through the motion graph using a sparse set of measurements from the performance, acquired from both worn inertial and global location sensors. We demonstrate the efficacy of our approach in a challenging application scenario, with a high-performance tennis athlete wearing one or more lightweight body-worn accelerometers and a single overhead camera providing the athlete's global position and orientation data. However, the approach is flexible in both the number and variety of input sensor data used. The technique can also be adapted for searching a motion graph efficiently in linear time in alternative applications.
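
The core search idea can be sketched as dynamic programming over a motion graph, choosing the clip path whose features best explain the sparse measurements; the graph, the one-dimensional features and the sensor model below are toy assumptions, not the paper's.

```python
# Viterbi-style path selection through a toy motion graph.
import numpy as np

def synthesize_path(nodes, edges, features, measurements):
    """nodes: clip ids; edges: {clip: [allowed next clips]};
    features: {clip: feature vector}; measurements: one vector per time step."""
    T = len(measurements)
    INF = float("inf")
    cost = {n: np.linalg.norm(features[n] - measurements[0]) for n in nodes}
    back = [{n: None for n in nodes}]
    for t in range(1, T):
        new_cost, back_t = {}, {}
        for n in nodes:
            preds = [p for p in nodes if n in edges.get(p, [])]
            if not preds:
                new_cost[n], back_t[n] = INF, None
                continue
            p_best = min(preds, key=lambda p: cost[p])   # cheapest predecessor
            new_cost[n] = cost[p_best] + np.linalg.norm(features[n] - measurements[t])
            back_t[n] = p_best
        cost, back = new_cost, back + [back_t]
    # Backtrack from the best final clip.
    n = min(cost, key=cost.get)
    path = [n]
    for t in range(T - 1, 0, -1):
        n = back[t][n]
        path.append(n)
    return list(reversed(path))

nodes = ["idle", "step", "swing"]
edges = {"idle": ["idle", "step"], "step": ["step", "swing", "idle"], "swing": ["idle"]}
features = {"idle": np.array([0.1]), "step": np.array([0.6]), "swing": np.array([1.4])}
measured = [np.array([0.1]), np.array([0.7]), np.array([1.5]), np.array([0.2])]
print(synthesize_path(nodes, edges, features, measured))   # idle, step, swing, idle
```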

16.
An increasing number of large-scale applications exploit peer-to-peer network architectures to provide highly scalable and flexible services. Among these applications, data management in peer-to-peer systems is one of the interesting domains. In this paper, we investigate the multidimensional skyline computation problem on a structured peer-to-peer network. In order to achieve low communication cost and quick response time, we utilize the iMinMax(θ) method to transform high-dimensional data into one-dimensional values and distribute the data in a structured peer-to-peer network called BATON. Thereafter, we propose a progressive algorithm with an adaptive filter technique for efficient skyline computation in this environment. We further discuss some optimization techniques for the algorithm, and summarize the key principles of our algorithm into a query routing protocol with detailed analysis. Finally, we conduct an extensive experimental evaluation to demonstrate the efficiency of our approach.
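
To make the target operator concrete, here is a centralized, block-nested-loop style skyline sketch: a point is in the skyline if no other point dominates it. The BATON routing, iMinMax mapping and adaptive filters of the paper are not reproduced.

```python
# Centralized skyline computation (illustrative only; smaller values are better).
def dominates(a, b):
    """True if point a dominates point b: at least as good everywhere, strictly better once."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def skyline(points):
    result = []
    for p in points:
        if any(dominates(q, p) for q in points if q is not p):
            continue           # p is dominated, discard it
        result.append(p)
    return result

# Example: (price, distance) tuples for hotels; keep the Pareto-optimal ones.
hotels = [(50, 8), (60, 3), (40, 9), (80, 2), (55, 8)]
print(skyline(hotels))   # [(50, 8), (60, 3), (40, 9), (80, 2)]
```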

17.
The generation of inbetween frames that interpolate a given set of key frames is a major component in the production of a 2D feature animation. Our objective is to considerably reduce the cost of the inbetweening phase by offering an intuitive and effective interactive environment that automates inbetweening when possible while allowing the artist to guide, complement, or override the results. Tight inbetweens, which interpolate similar key frames, are particularly time-consuming and tedious to draw. Therefore, we focus on automating these high-precision and expensive portions of the process. We have designed a set of user-guided semi-automatic techniques that fit well with current practice and minimize the number of required artist gestures. We present a novel technique for stroke interpolation from only two keys which combines a stroke motion constructed from logarithmic spiral vertex trajectories with a stroke deformation based on curvature averaging and twisting warps. We discuss our system in the context of a feature animation production environment and evaluate our approach with real production data.
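
A logarithmic-spiral vertex trajectory of the kind named above can be sketched as follows: in polar coordinates about a chosen centre, the angle varies linearly while the radius varies geometrically. The curvature-averaging and twisting-warp deformation of the paper are not reproduced, and the choice of spiral centre here is a user-supplied assumption.

```python
# Interpolate a stroke vertex between two key positions along a log spiral.
import numpy as np

def log_spiral_lerp(p0, p1, center, t):
    """Position at parameter t in [0, 1] along a log spiral from p0 to p1."""
    v0, v1 = np.asarray(p0) - center, np.asarray(p1) - center
    r0, r1 = np.linalg.norm(v0), np.linalg.norm(v1)
    a0, a1 = np.arctan2(v0[1], v0[0]), np.arctan2(v1[1], v1[0])
    da = (a1 - a0 + np.pi) % (2 * np.pi) - np.pi   # shortest signed angle
    r = r0 * (r1 / r0) ** t                        # geometric radius blend
    a = a0 + t * da                                # linear angle blend
    return center + r * np.array([np.cos(a), np.sin(a)])

center = np.array([0.0, 0.0])
key0, key1 = np.array([2.0, 0.0]), np.array([0.0, 1.0])
inbetweens = [log_spiral_lerp(key0, key1, center, t) for t in np.linspace(0, 1, 5)]
```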

18.
We present an Eulerian method for the real-time simulation of intrinsic fluid dynamics effects on deforming surfaces. Our method is based on a novel semi-Lagrangian closest point method for the solution of partial differential equations on animated triangle meshes. We describe this method and demonstrate its use to compute and visualize flow and wave propagation along such meshes at high resolution and speed. Underlying our technique is the efficient conversion of an animated triangle mesh into a time-dependent implicit representation based on closest surface points. The proposed technique is unconditionally stable with respect to the surface deformation and, in contrast to comparable Lagrangian techniques, its precision does not depend on the level of detail of the surface triangulation.
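
The closest-point idea underlying the method can be illustrated on a sphere, whose closest-point map is analytic: a surface quantity is extended to a volumetric grid by sampling it at each grid point's closest surface point, after which standard 3D stencils apply. The paper works with animated triangle meshes instead, and the example field below is made up.

```python
# Closest-point extension of a surface function onto a 3D grid (sphere example).
import numpy as np

def closest_point_on_sphere(p, center, radius):
    d = p - center
    n = np.linalg.norm(d, axis=-1, keepdims=True)
    return center + radius * d / np.where(n == 0, 1.0, n)

def extend_to_grid(surface_fn, grid_points, center, radius):
    """Value at a grid point = value at its closest surface point, which makes
    the extended field constant along surface normals."""
    cp = closest_point_on_sphere(grid_points, center, radius)
    return surface_fn(cp)

# Example surface quantity: a wave height depending on latitude.
surface_fn = lambda q: np.sin(3.0 * np.arcsin(np.clip(q[..., 2], -1, 1)))
xs = np.linspace(-1.5, 1.5, 16)
grid = np.stack(np.meshgrid(xs, xs, xs, indexing="ij"), axis=-1).reshape(-1, 3)
values = extend_to_grid(surface_fn, grid, center=np.zeros(3), radius=1.0)
```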

19.
We present a new real-time approach to simulating deformable objects that uses a learnt statistical model to achieve a high degree of realism. Our approach improves upon state-of-the-art interactive shape-matching meshless simulation methods by capturing important nuances not only of an object's kinematics but also of its dynamic texture variation. We are able to achieve this in an automated pipeline from data capture to simulation. Our system allows for the capture of idiosyncratic characteristics of an object's dynamics, which for many simulations (e.g. facial animation) is essential. We allow for the plausible simulation of mechanically complex objects without knowledge of their inner workings. The main idea of our approach is to use a flexible statistical model to achieve a geometrically driven simulation that allows for arbitrarily complex yet easily learned deformations while at the same time preserving the desirable properties (stability, speed and memory efficiency) of current shape-matching simulation systems. The principal advantage of our approach is the ease with which a pseudo-mechanical model can be learned from 3D scanner data to yield realistic animation. We present examples of non-trivial biomechanical objects simulated on a desktop machine in real-time, demonstrating superior realism over current geometrically motivated simulation techniques.
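
The shape-matching step that the learnt model builds on can be sketched via a polar decomposition, in the style of Müller et al.'s meshless shape matching: find the rotation that best maps the rest configuration onto the deformed one, then pull particles toward the rotated rest shape. The statistical/learnt component of the paper is not reproduced, and equal particle masses are assumed.

```python
# Basic meshless shape matching via polar decomposition.
import numpy as np
from scipy.linalg import polar

def shape_match_goals(rest, deformed):
    """Return goal positions for each particle (equal masses assumed)."""
    rest_cm, def_cm = rest.mean(axis=0), deformed.mean(axis=0)
    q, p = rest - rest_cm, deformed - def_cm
    A = p.T @ q                       # covariance of deformed vs. rest offsets
    R, _ = polar(A)                   # rotational part of A (A = R * S)
    return def_cm + q @ R.T           # rigidly transformed rest shape

def shape_match_step(rest, positions, stiffness=0.5):
    """Move particles a fraction of the way toward their goal positions."""
    goals = shape_match_goals(rest, positions)
    return positions + stiffness * (goals - positions)

rest = np.array([[0., 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]])
deformed = rest + np.random.default_rng(1).normal(0, 0.05, rest.shape)
print(shape_match_step(rest, deformed))
```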

20.
Query rewriting for semi-structured data
Query rewriting is a fundamental problem in database research and is closely related to query optimization, data warehousing, information integration and semantic caching. Massive amounts of semi-structured data now exist on the Internet, and information integration produces a large number of semi-structured views. How to rewrite user queries using materialized semi-structured views so as to reduce response time has therefore become a hot research topic. The problem is NP-hard in nature. This paper proposes a new method for semi-structured query rewriting. While guaranteeing the correctness and completeness of the algorithm, the method exploits the characteristics of semi-structured data and the relationships between query subgoals to reduce the generation of candidate rewritings in the exponential search space. Theoretical analysis shows that it greatly reduces the cost of the algorithm.
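
In the simplest setting, rewriting with a materialized view amounts to answering a residual path on top of the view's cached results rather than re-traversing from the root; the sketch below, over a made-up toy document, illustrates that case only. The general problem over graph data and multiple views is the NP-hard case the abstract refers to.

```python
# Toy rewriting of a label-path query using a materialized path view.
def rewrite_with_view(query_path, view_path):
    """query_path / view_path: label paths like ['site', 'people', 'person'].
    Returns the residual path to evaluate on top of the view, or None."""
    if view_path == query_path[: len(view_path)]:
        return query_path[len(view_path):]
    return None

def evaluate(start_nodes, path, children):
    """Navigate a labelled tree: children maps (node, label) -> list of nodes."""
    nodes = list(start_nodes)
    for label in path:
        nodes = [c for n in nodes for c in children.get((n, label), [])]
    return nodes

# Tiny document and a view that materializes the answers to site/people.
children = {("root", "site"): ["s1"], ("s1", "people"): ["p_list"],
            ("p_list", "person"): ["alice", "bob"]}
view_results = evaluate(["root"], ["site", "people"], children)        # cached
residual = rewrite_with_view(["site", "people", "person"], ["site", "people"])
answer = evaluate(view_results, residual, children)
print(answer)   # ['alice', 'bob'] without re-traversing from the root
```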

