Similar Documents
Found 20 similar documents (search time: 31 ms)
1.
Modern computer graphics applications usually require high-resolution object models for realistic rendering. However, it is expensive and difficult to deform such models in real time. To reduce the computational cost during deformations, a dense model is often manipulated through a simplified structure, called a cage, which envelops the model. However, cages are usually built interactively by users, which is tedious and time-consuming. In this paper, we introduce a novel method that can build cages automatically for both 2D polygons and 3D triangular meshes. The method consists of two steps: 1) simplifying the input model with quadric error metrics and quadratic programming to build a coarse cage; 2) removing the self-intersections of the coarse cage with Delaunay partitions. With this new method, a user can build a cage that envelops an input model either entirely or partially, with approximately the number of vertices the user specifies. Experimental results show that, compared to other cage-building methods with the same number of vertices, cages built by our method are more similar to the input models. Thus, the dense models can be manipulated with higher accuracy through our cages.
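The coarse cage in step 1 is driven by quadric error metrics (QEM). As a minimal, hedged sketch of that standard ingredient (not the authors' implementation; the function names are ours), the fundamental quadric of a plane and the error it assigns to a vertex can be computed as:

```python
import numpy as np

def plane_quadric(a, b, c, d):
    """Fundamental error quadric for the plane ax + by + cz + d = 0
    (unit normal assumed), as used in quadric error metrics (QEM)."""
    p = np.array([a, b, c, d], dtype=float)
    return np.outer(p, p)

def vertex_error(Q, v):
    """Squared-distance error of homogeneous vertex v = (x, y, z, 1)
    with respect to the accumulated quadric Q."""
    v = np.asarray(v, dtype=float)
    return float(v @ Q @ v)

# A vertex on the plane z = 0 has zero error; one at z = 2 has error 4.
Q = plane_quadric(0.0, 0.0, 1.0, 0.0)
print(vertex_error(Q, [1.0, 5.0, 0.0, 1.0]))  # 0.0
print(vertex_error(Q, [1.0, 5.0, 2.0, 1.0]))  # 4.0
```

In full QEM simplification, each vertex accumulates the sum of the quadrics of its incident faces, and edge collapses are ordered by the resulting error.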

2.
In this paper, we present a systematization of techniques that use quality metrics to help in the visual exploration of meaningful patterns in high-dimensional data. In a number of recent papers, different quality metrics are proposed to automate the demanding search through large spaces of alternative visualizations (e.g., alternative projections or orderings), allowing the user to concentrate on the most promising visualizations suggested by the quality metrics. Over the last decade, this approach has witnessed a remarkable development, but few reflections exist on how these methods are related to each other and how the approach can be developed further. For this purpose, we provide an overview of approaches that use quality metrics in high-dimensional data visualization and propose a systematization based on a thorough literature review. We carefully analyze the papers and derive a set of factors for discriminating the quality metrics, visualization techniques, and the process itself. The process is described through a reworked version of the well-known information visualization pipeline. We demonstrate the usefulness of our model by applying it to several existing approaches that use quality metrics, and we provide reflections on implications of our model for future research.
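To make the idea of an automated quality metric concrete, the following hedged sketch scores a 2D projection by class separation, in the spirit of distance-consistency measures from this literature; the function name and scoring details are our own, not taken from the survey:

```python
import numpy as np

def distance_consistency(points, labels):
    """Quality metric for a labeled 2D projection: the fraction of points
    whose nearest class centroid is their own class's centroid. Higher
    values suggest a projection with better class separation."""
    points = np.asarray(points, dtype=float)
    labels = np.asarray(labels)
    classes = np.unique(labels)
    centroids = {c: points[labels == c].mean(axis=0) for c in classes}
    hits = 0
    for p, l in zip(points, labels):
        nearest = min(classes, key=lambda c: np.linalg.norm(p - centroids[c]))
        if nearest == l:
            hits += 1
    return hits / len(points)

# Two well-separated clusters score 1.0.
pts = [(0, 0), (0, 1), (10, 10), (10, 11)]
labs = ["a", "a", "b", "b"]
print(distance_consistency(pts, labs))  # 1.0
```

A system of the surveyed kind would evaluate such a score over many candidate projections and present the highest-ranked ones to the user.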

3.
4.
5.
This paper describes the FACT system for knowledge discovery from text. It discovers associations—patterns of co-occurrence—amongst keywords labeling the items in a collection of textual documents. In addition, when background knowledge is available about the keywords labeling the documents, FACT is able to use this information in its discovery process. FACT takes a query-centered view of knowledge discovery, in which a discovery request is viewed as a query over the implicit set of possible results supported by a collection of documents, and where background knowledge is used to specify constraints on the desired results of this query process. Execution of a knowledge-discovery query is structured so that these background-knowledge constraints can be exploited in the search for possible results. Finally, rather than requiring a user to specify an explicit query expression in the knowledge-discovery query language, FACT presents the user with a simple-to-use graphical interface to the query language, with the language providing a well-defined semantics for the discovery actions performed by a user through the interface.
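A minimal sketch of keyword co-occurrence association discovery of the kind FACT performs; the support/confidence thresholds and names below are our illustrative choices, not FACT's query language:

```python
from itertools import combinations
from collections import Counter

def keyword_associations(docs, min_support=0.5, min_conf=0.8):
    """Mine simple pairwise associations A -> B between document keywords,
    a toy sketch of co-occurrence pattern discovery (not FACT itself)."""
    n = len(docs)
    single = Counter(k for d in docs for k in set(d))
    pair = Counter(frozenset(p)
                   for d in docs
                   for p in combinations(sorted(set(d)), 2))
    rules = []
    for p, cnt in pair.items():
        if cnt / n < min_support:
            continue
        a, b = tuple(p)
        for x, y in ((a, b), (b, a)):
            conf = cnt / single[x]
            if conf >= min_conf:
                rules.append((x, y, cnt / n, conf))
    return rules

docs = [["tax", "reform"], ["tax", "reform", "vote"], ["tax", "budget"]]
for ante, cons, sup, conf in keyword_associations(docs):
    print(f"{ante} -> {cons}  support={sup:.2f} confidence={conf:.2f}")
```

Background knowledge, in FACT's setting, would additionally constrain which keyword combinations are worth searching at all.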

6.
To facilitate business collaboration and interoperation among enterprises, it is critical to discover and reuse appropriate business processes modeled in different languages and stored in different repositories. However, the formats of business process models are very different, which makes it a challenge to fuse them in a unified way without changing their original representations and semantics. To solve this problem, this paper uses a semantic interoperability technique that is able to transform heterogeneous process models into uniform registered items. Based on the general and unambiguous metamodel for process model registration (PMR for short) in ISO/IEC 19763-5 that we proposed before, in this article we provide a generic process model registration framework for registering heterogeneous business process models, to facilitate semantic discovery of business processes across enterprises and to promote process interoperation and business collaboration. Considering that the Event-driven Process Chain (EPC) is a popular process model widely used in industry, we focus on the mapping rules and related transformation algorithms from EPC to PMR as an instantiation of our framework and develop an automatic process model registration tool for EPC. Moreover, we conduct a series of experiments to verify the correctness and efficiency of our proposed framework by leveraging a real data set of 604 EPCs from SAP.
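A toy sketch of rule-based mapping from EPC node types to uniform registration items; the target item names here are hypothetical placeholders, not the actual ISO/IEC 19763-5 PMR vocabulary:

```python
def register_epc(epc_nodes, type_map=None):
    """Transform EPC nodes into uniform registration items via a mapping
    table, illustrating model-to-metamodel transformation rules. The
    'Registered*' kinds are invented for this sketch."""
    type_map = type_map or {
        "event": "RegisteredState",
        "function": "RegisteredActivity",
        "connector": "RegisteredFlowControl",
    }
    items = []
    for node in epc_nodes:
        kind = type_map.get(node["type"])
        if kind is None:
            raise ValueError(f"no mapping rule for EPC type {node['type']!r}")
        items.append({"kind": kind, "label": node["label"], "source": "EPC"})
    return items

epc = [{"type": "event", "label": "Order received"},
       {"type": "function", "label": "Check stock"}]
print(register_epc(epc))
```

The real framework's mapping rules also carry over control-flow structure and semantics, not just node types.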

7.
Single view reconstruction (SVR) is an important approach for 3D shape recovery, since many no-longer-existing buildings and scenes are captured only in a single image. Historical photographs are often the most precise source for virtual reconstruction of damaged cultural heritage. In the semi-automated techniques that are mainly used in practice, the user is the one who recognizes and selects the constraints to be used. Hence, the veridicality and accuracy of the final model partially rely on human decisions. We noticed that users, especially non-expert users such as cultural heritage professionals, usually do not fully understand the SVR process, which is why they have trouble making decisions while modelling. That often fundamentally affects the quality of the final 3D models. Considering the importance of human performance in SVR approaches, in this paper we offer a solution that can be used to reduce the number of user errors. Specifically, we address the problem of locating the centre of projection (CP). We introduce a tool set for 3D visualization of the CP's geometrical loci that provides the user with a clear idea of how the CP's location is determined. Thanks to this type of visualization, the user becomes aware of the following: (1) which constraints are relevant for the CP location, (2) whether the image is suitable for SVR, (3) whether more constraints for the CP location are required, (4) which constraints should be used for the best match, and (5) whether additional constraints will create useful redundancy. To test our approach and the assumptions it relies on, we compared the number of user errors made in the standard approaches with that made when the additional visualization is provided.

8.
Discovering knowledge from data means finding useful patterns in data; this process has created both opportunities and challenges for businesses in the big data era. Meanwhile, improving the quality of the discovered knowledge is important for making correct decisions in an unpredictable environment. Various models have been developed in the past; however, few used both data quality and prior knowledge to control the quality of the discovery processes and results. In this paper, a multi-objective model of knowledge discovery in databases is developed, which aids the discovery process by utilizing prior process knowledge and different measures of data quality. To illustrate the model, association rule mining is considered and formulated as a multi-objective problem that takes into account data quality measures and prior process knowledge, instead of as a single-objective problem. Measures such as confidence, support, comprehensibility and interestingness are used. A Pareto-based integrated multi-objective Artificial Bee Colony (IMOABC) algorithm is developed to solve the problem. Using well-known, publicly available databases, experiments are carried out to compare the performance of IMOABC with the NSGA-II, MOPSO and Apriori algorithms. The computational results show that IMOABC outperforms NSGA-II, MOPSO and Apriori on different measures, and it can easily be customized or tailored to user requirements while still generating high-quality association rules.
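Pareto-based selection underlies this kind of multi-objective search. A minimal sketch of the dominance test and nondominated filtering, with hypothetical rule scores (this is the generic mechanism, not the IMOABC algorithm itself):

```python
def dominates(u, v):
    """True if objective vector u Pareto-dominates v (all objectives
    maximized): u is no worse everywhere and strictly better somewhere."""
    return all(a >= b for a, b in zip(u, v)) and any(a > b for a, b in zip(u, v))

def pareto_front(solutions):
    """Filter candidate rules (scored e.g. by support and confidence)
    down to the nondominated set."""
    return [s for s in solutions
            if not any(dominates(t, s) for t in solutions if t is not s)]

# Hypothetical (support, confidence) scores for four candidate rules:
# the third rule is dominated by every other and is discarded.
rules = [(0.6, 0.9), (0.8, 0.7), (0.5, 0.5), (0.7, 0.8)]
print(pareto_front(rules))  # [(0.6, 0.9), (0.8, 0.7), (0.7, 0.8)]
```

IMOABC, like NSGA-II and MOPSO, maintains and refines such a nondominated set over iterations, here with objectives including comprehensibility and interestingness as well.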

9.
Existing tools for scientific modeling offer little support for improving models in response to data, whereas computational methods for scientific knowledge discovery provide few opportunities for user input. In this paper, we present a language for stating process models and background knowledge in terms familiar to scientists, along with an interactive environment for knowledge discovery that lets the user construct, edit, and visualize scientific models, use them to make predictions, and revise them to better fit available data. We report initial studies in three domains that illustrate the operation of this environment and the results of a user study carried out with domain scientists. Finally, we discuss related efforts on model formalisms and revision and suggest priorities for additional research.

10.
Visualization workflows are important services for expert users to analyze watersheds when using our HydroTerre end-to-end workflows. Analysis is an interactive and iterative process and we demonstrate that the expert user can focus on model results, not data preparation, by using a web application to rapidly create, tune, and calibrate hydrological models anywhere in the continental USA (CONUS). The HydroTerre system captures user interaction for provenance and reproducibility to share modeling strategies with modelers. Our end-to-end workflow consists of four workflows. The first is data workflows using Essential Terrestrial Variables (ETV) data sets that we demonstrated to construct watershed models anywhere in the CONUS (Leonard and Duffy, 2013). The second is data-model workflows that transform the data workflow results to model inputs. The model inputs are consumed in the third workflow, model workflows (Leonard and Duffy, 2014a) that handle distribution of data and model within High Performance Computing (HPC) environments. This article focuses on our fourth workflow, visualization workflows, which consume the first three workflows to form an end-to-end system to create and share hydrological model results efficiently for analysis and peer review. We show how visualization workflows are incorporated into the HydroTerre infrastructure design and demonstrate the efficiency and robustness for an expert modeler to produce, analyze, and share new hydrological models using CONUS national datasets.

11.
Physically based rendering is a well-understood technique to produce realistic-looking images. However, different algorithms exist for efficiency reasons, which work well in certain cases but fail or produce rendering artefacts in others. Few tools allow a user to gain insight into the algorithmic processes. In this work, we present such a tool, which combines techniques from information visualization and visual analytics with physically based rendering. It consists of an interactive parallel coordinates plot, with a built-in sampling-based data reduction technique to visualize the attributes associated with each light sample. Two-dimensional (2D) and three-dimensional (3D) heat maps depict any desired property of the rendering process. An interactively rendered 3D view of the scene displays animated light paths based on the user's selection to gain further insight into the rendering process. The provided interactivity enables the user to guide the rendering process for more efficiency. To show its usefulness, we present several applications based on our tool. This includes differential light transport visualization to optimize light setup in a scene, finding the causes of and resolving rendering artefacts, such as fireflies, as well as a path length contribution histogram to evaluate the efficiency of different Monte Carlo estimators.

12.
Microblogs, as a typical kind of social media, have many research implications in social event discovery and in social-media-based e-learning and collaborative learning. At present, researchers usually employ feature-based classification approaches to detect social events in microblogs. However, it is very common to get different results when different features are used in event discovery. Therefore, how to select appropriate features for event discovery in microblogs has become a critical issue. In this paper, we analyze five different feature selection methods and present an improved method for selecting features for microblog-based event discovery. We compare all the methods on a real microblog dataset in terms of various metrics, including precision, recall, and F-measure. Finally, we discuss the best feature selection method for event discovery in microblogs. To the best of our knowledge, there are no such comparative studies on feature selection for event discovery in social media, and this paper is expected to offer useful references for future research and applications on event discovery in microblogs.
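The comparison metrics mentioned above can be sketched as follows; the method names and detection counts are hypothetical, purely to illustrate how precision, recall, and F-measure rank two feature selection methods:

```python
def f_measure(tp, fp, fn):
    """Precision, recall and F1 from true-positive, false-positive and
    false-negative counts of detected events."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Hypothetical detection counts for two feature selection methods.
for name, counts in {"method A": (40, 10, 20), "method B": (45, 15, 15)}.items():
    p, r, f1 = f_measure(*counts)
    print(f"{name}: precision={p:.2f} recall={r:.2f} F1={f1:.2f}")
```

Because precision and recall trade off against each other, F-measure is the usual single number for ranking such methods.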

13.
Business process modeling is an essential task in business process management. Process models that are comprehensively understood by business stakeholders allow organizations to profit from this field. In this work, we report what is being investigated in the topic "visualization of business process models", since visualization is known to improve perception and comprehension of structures and patterns in datasets. We performed a systematic literature review through which we selected and analyzed 46 papers from two points of view. First, we observed the similarities between the papers regarding their main scope. From this observation we classified the papers into six categories: "Augmentation of existing elements", "Creation of new elements", "Exploration of the 3D space", "Information visualization", "Visual feedback concerning problems detected in process models" and "Perspectives". The less explored categories, which could represent research challenges for further exploration, are "Visual feedback" and "Information visualization". Second, we analyzed the papers based on a well-known visualization analysis framework, which allowed us to obtain a high-level view of the proposals presented in the literature and to identify that few authors explore user interaction features in their work. Besides that, we also found that exactly half of the papers base their proposals on BPMN and present results from evaluation or validation. Since BPMN is an ISO standard and there are many tools based on BPMN, there should be more research aiming to improve the knowledge around this topic. We expect our results to inspire further work that brings the field of business process model visualization forward, so that the advantages of information visualization help the tasks of business process modeling and management.

14.
Exploring data using visualization systems has been shown to be an extremely powerful technique. However, one of the challenges with such systems is an inability to completely support the knowledge discovery process. More than simply looking at data, users will make a semipermanent record of their visualizations by printing out a hard copy. Subsequently, users will mark and annotate these static representations, either for dissemination purposes or to augment their personal memory of what was witnessed. In this paper, we present a model for recording the history of user explorations in visualization environments, augmented with the capability for users to annotate their explorations. A prototype system is used to demonstrate how this provenance information can be recalled and shared. The prototype system generates interactive visualizations of the provenance data using a spatio-temporal technique. Beyond the technical details of our model and prototype, results from a controlled experiment that explores how different history mechanisms impact problem solving in visualization environments are presented.
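A minimal sketch of a history-plus-annotation record of the kind the model describes; the class and field names are our assumptions, not the prototype's API:

```python
import time
from dataclasses import dataclass, field

@dataclass
class ExplorationStep:
    """One visualization state in the exploration history, with optional
    user annotations attached after the fact."""
    action: str
    params: dict
    timestamp: float = field(default_factory=time.time)
    annotations: list = field(default_factory=list)

class ProvenanceLog:
    """Append-only exploration history that can be annotated and shared."""
    def __init__(self):
        self.steps = []

    def record(self, action, **params):
        self.steps.append(ExplorationStep(action, params))
        return len(self.steps) - 1  # step id for later annotation

    def annotate(self, step_id, note):
        self.steps[step_id].annotations.append(note)

log = ProvenanceLog()
sid = log.record("zoom", region=(10, 20, 30, 40))
log.annotate(sid, "outlier cluster appears here")
print(log.steps[sid].action, log.steps[sid].annotations)
```

The timestamps make a spatio-temporal replay of the exploration possible, which is the basis for the interactive provenance views the paper describes.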

15.
Fast and robust generation of city-scale seamless 3D urban models (total citations: 1; self-citations: 0; citations by others: 1)
Since the introduction of the concept of "Digital Earth", almost every major international city has been re-constructed in the virtual world. A large volume of geometric models describing urban objects has become freely available in the public domain via software like ArcGlobe and Google Earth. Although mostly created for visualization, these urban models can benefit many applications beyond visualization including city scale evacuation planning and earth phenomenon simulations. However, these models are mostly loosely structured and implicitly defined and require tedious manual preparation that usually takes weeks if not months before they can be used. Designing algorithms that can robustly and efficiently handle unstructured urban models at the city scale becomes a main technical challenge. In this paper, we present a framework that generates seamless 3D architectural models from 2D ground plans with elevation and height information. These overlapping ground plans are commonly used in the current GIS software such as ESRI ArcGIS and urban model synthesis methods to depict various components of buildings. Due to measurement and manual errors, these ground plans usually contain small, sharp, and various (nearly) degenerate artifacts. In this paper, we show both theoretically and empirically that our framework is efficient and numerically stable. Based on our review of the related work, we believe this is the first work that attempts to automatically create 3D architectural meshes for simulation at the city level. With the goal of providing greater benefit beyond visualization from this large volume of urban models, our initial results are encouraging.
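One elementary step of turning a 2D ground plan with elevation and height into a 3D shell can be sketched as a prism extrusion; this illustrates the input data only, and none of the paper's artifact-repair machinery:

```python
def extrude_footprint(footprint, elevation, height):
    """Extrude a 2D ground plan (counter-clockwise list of (x, y) vertices)
    into a simple closed 3D building shell: a bottom ring, a top ring, and
    one quad face per footprint edge."""
    n = len(footprint)
    bottom = [(x, y, elevation) for x, y in footprint]
    top = [(x, y, elevation + height) for x, y in footprint]
    vertices = bottom + top
    # Each side quad joins edge (i, i+1) of the bottom ring to the top ring.
    sides = [(i, (i + 1) % n, n + (i + 1) % n, n + i) for i in range(n)]
    return vertices, sides

verts, faces = extrude_footprint([(0, 0), (4, 0), (4, 3), (0, 3)], 100.0, 12.0)
print(len(verts), len(faces))  # 8 4
```

The hard part the paper addresses begins where this sketch ends: overlapping footprints and near-degenerate edges must be resolved before such extrusions fuse into a seamless, simulation-ready mesh.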

16.
To attack the problem of handling increasingly vast stores of information, we discuss a new approach to data exploration that requires the close coupling of man and machine. We call this approach discovery visualization to emphasize the importance of visual display and interaction. This approach aims to discover new relations, new features, and new knowledge. A key element in discovery visualization lies in heightening the machine's awareness of users so they have, for example, focus-based manipulation, based on where and how closely they look at the displayed scene, in addition to direct manipulation. This process makes no sense unless the machine can respond immediately. Further, we promote the concept of continuous interaction with constant feedback between man and machine, and constant unfolding of the data. Finally, automated response must combine with user selection to achieve and sustain animated action, even in data sets of great or varying complexity.

17.
In scientific illustrations and visualization, cutaway views are often employed as an effective technique for occlusion management in densely packed scenes. We propose a novel method for authoring cutaway illustrations of mesoscopic biological models. In contrast to existing cutaway algorithms, we take advantage of the specific nature of the biological models. These models consist of thousands of instances with a comparably smaller number of different types. Our method constitutes a two-stage process. In the first step, clipping objects are placed in the scene, creating a cutaway visualization of the model. During this process, a hierarchical list of stacked bars informs the user about the instance visibility distribution of each individual molecular type in the scene. In the second step, the visibility of each molecular type is fine-tuned through these bars, which at this point act as interactive visibility equalizers. An evaluation of our technique with domain experts confirmed that our equalizer-based approach to visibility specification is valuable and effective for both scientific and educational purposes.
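A hedged sketch of the per-type visibility idea behind the equalizers; random sampling stands in for the paper's clipping-based instance selection, and all names are ours:

```python
import random

def equalize_visibility(instances_by_type, target_fraction):
    """Per-type visibility equalizer sketch: for each molecular type, keep
    only the requested fraction of its instances visible. A fixed seed keeps
    the sketch reproducible."""
    rng = random.Random(42)
    visible = {}
    for mtype, ids in instances_by_type.items():
        k = round(target_fraction.get(mtype, 1.0) * len(ids))
        visible[mtype] = sorted(rng.sample(ids, k))
    return visible

# Hypothetical scene: many hemoglobin instances, a few membrane patches.
scene = {"hemoglobin": list(range(10)), "membrane": list(range(4))}
vis = equalize_visibility(scene, {"hemoglobin": 0.3, "membrane": 1.0})
print({t: len(v) for t, v in vis.items()})  # {'hemoglobin': 3, 'membrane': 4}
```

In the actual technique, which instances get hidden is determined by the clipping objects, and the equalizer bars adjust the retained fraction per type interactively.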

18.
Evolutionary multi-objective optimization for rule mining: a review (total citations: 1; self-citations: 0; citations by others: 1)
Evolutionary multi-objective optimization (EMOO) systems are evolutionary systems used for optimizing various measures of the evolving system. Rule mining has gained attention in the knowledge discovery literature, and the problem of discovering rules with specific properties is treated as a multi-objective optimization problem, the objectives to be optimized being metrics such as accuracy, comprehensibility, surprisingness, and novelty, to name a few. There is a variety of EMOO algorithms in the literature. The performance of these EMOO algorithms is influenced by various characteristics, including the evolutionary technique used, the chromosome representation, parameters such as population size, number of generations, crossover rate, mutation rate, and stopping criteria, the reproduction operators used, the objectives taken for optimization, the fitness function used, the optimization strategy, the type of data, the number of class attributes, and the area of application. This study reviews EMOO systems taking the above criteria into consideration. There are also hybridization strategies, such as the use of intelligent agents, fuzzification, metadata and metaheuristics, parallelization, interactiveness with the user, and visualization, which further enhance the performance and usability of a system. Genetic Algorithms (GAs) and Genetic Programming (GP) are two widely used evolutionary strategies for rule knowledge discovery in data mining. The proposed study therefore examines the various characteristics of EMOO systems with particular attention to these two evolutionary strategies.

19.
20.
In the diverse and self-governed multiple-clouds context, service management and discovery are greatly challenged by the dynamic and evolving features of services. How to manage the features of cloud services and support accurate and efficient service discovery has become an open problem in the area of cloud computing. This paper proposes a field model of multiple cloud services and a corresponding service discovery method to address the issue. Different from existing research, our approach is inspired by the Bohr atom model. We use the abstractions of energy levels and a jumping mechanism to describe service status and variations, and thereby to support service demarcation and discovery. The contributions of this paper are threefold. First, we propose the abstraction of the service energy level to represent the status of services, and a service jumping mechanism to investigate the dynamic and evolving features as the variations and re-demarcation of cloud services according to their energy levels. Second, we present the user acceptable service region to describe the services satisfying users' requests, and a corresponding service discovery method, which can significantly decrease the service search scope and improve the speed and precision of service discovery. Third, a series of algorithms is designed to implement the generation of the field model, user acceptable service regions, the service jumping mechanism, and user-oriented service discovery. We have conducted extensive experiments on the QWS dataset to validate and evaluate our proposed models and algorithms. The results show that the field model can well support the representation of the dynamic and evolving aspects of services in the multiple-clouds context and that the algorithms can improve the accuracy and efficiency of service discovery.
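A toy sketch of the energy-level and jumping abstractions; the mapping from a QoS score to a level is our hypothetical choice, and the paper's demarcation and discovery algorithms are far more elaborate:

```python
def energy_level(qos_score, levels=5):
    """Map a normalized QoS score in [0, 1] to a discrete service energy
    level, with level 1 the innermost (best) level in the Bohr-atom analogy."""
    qos_score = min(max(qos_score, 0.0), 1.0)
    return levels - min(int(qos_score * levels), levels - 1)

def jump(old_score, new_score, levels=5):
    """A service 'jumps' to a new energy level when its observed QoS
    changes enough to cross a level boundary."""
    return energy_level(new_score, levels) != energy_level(old_score, levels)

print(energy_level(0.95))  # 1  (best services sit at the innermost level)
print(jump(0.95, 0.40))    # True: the QoS drop triggers a jump outward
```

Discovery then only needs to search the levels intersecting the user's acceptable region, which is how the approach shrinks the search scope.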
