Similar Documents
20 similar documents found.
1.
Analyzing complex data is a non‐linear process that alternates between identifying discrete facts and developing overall assessments and conclusions. In addition, data analysis rarely occurs in solitude; multiple collaborators can be engaged in the same analysis, or intermediate results can be reported to stakeholders. However, current data‐driven communication tools are detached from the analysis process and promote linear stories that forgo the hierarchical and branching nature of data analysis, which leads to either too much or too little detail in the final report. We propose a conceptual design for integrated data‐driven reporting that allows for iterative structuring of insights into hierarchies linked to analytic provenance and chosen analysis views. The hierarchies become dynamic and interactive reports where collaborators can review and modify the analysis at a desired level of detail. Our web‐based InsideInsights system provides interaction techniques to annotate states of analytic components, structure annotations, and link them to appropriate presentation views. We demonstrate the generality and usefulness of our system with two use cases and a qualitative expert review.
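
To make the proposed structure concrete, here is a minimal, hypothetical sketch (in Python; all names are ours, not the system's) of a report node that links an insight to an analytic state and nests child annotations, so a report can be rendered at a chosen level of detail:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Annotation:
    """One insight, linked to the analytic state (provenance) it was made in."""
    text: str
    state_id: str                       # id of the captured analysis state
    view: Optional[str] = None          # presentation view linked to this insight
    children: List["Annotation"] = field(default_factory=list)

    def render(self, depth: int = 0, max_depth: int = 1) -> str:
        """Emit the hierarchical report down to a chosen level of detail."""
        out = "  " * depth + f"- {self.text} [state {self.state_id}]\n"
        if depth < max_depth:
            out += "".join(c.render(depth + 1, max_depth) for c in self.children)
        return out

report = Annotation("Sales dropped in Q3", "s42", "line-chart",
                    [Annotation("Drop concentrated in one region", "s43")])
print(report.render(max_depth=1))   # collaborators pick the detail level
```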

2.
The majority of visualizations on the web are still stored as raster images, making them inaccessible to visually impaired users. We propose a deep‐neural‐network‐based approach that automatically recognizes key elements in a visualization, including the visualization type, graphical elements, labels, legends, and, most importantly, the original data conveyed in the visualization. We leverage the extracted information to make the chart's content readable to visually impaired users. Based on interviews with visually impaired users, we built a Google Chrome extension designed to work with screen reader software to automatically decode charts on a webpage using our pipeline. We compared the performance of the back‐end algorithm with existing methods and evaluated the utility using qualitative feedback from visually impaired users.

3.
To complement the currently existing definitions and conceptual frameworks of visual analytics, which focus mainly on activities performed by analysts and types of techniques they use, we attempt to define the expected results of these activities. We argue that the main goal of doing visual analytics is to build a mental and/or formal model of a certain piece of reality reflected in data. The purpose of the model may be to understand, to forecast or to control this piece of reality. Based on this model‐building perspective, we propose a detailed conceptual framework in which the visual analytics process is considered as a goal‐oriented workflow producing a model as a result. We demonstrate how this framework can be used for performing an analytical survey of the visual analytics research field and identifying the directions and areas where further research is needed.

4.
The usage of deep learning models for tagging input data has increased over the past years because of their accuracy and high performance. A successful application is scoring sleep stages. In this scenario, models are trained to predict the sleep stages of individuals. Although their predictive accuracy is high, there are still misclassifications that prevent doctors from properly diagnosing sleep‐related disorders. This paper presents a system that allows users to explore the output of deep learning models in a real‐life scenario to spot and analyze faulty predictions. These can be corrected by users to generate a sequence of sleep stages to be examined by doctors. Our approach addresses a real‐life scenario in which ground truth is absent. It differs from others in that our goal is not to improve the model itself, but to correct the predictions it provides. We demonstrate that our approach is effective in identifying faulty predictions and helping users to fix them in the proposed use case.
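
The abstract does not state how faulty predictions are surfaced; one plausible, purely illustrative heuristic in the absence of ground truth is to flag epochs whose predicted stage distribution is close to uniform (high entropy). The threshold below is an assumption:

```python
import numpy as np

def flag_uncertain_epochs(probs: np.ndarray, threshold: float = 1.0) -> np.ndarray:
    """probs: (n_epochs, n_stages) softmax outputs; returns epoch indices to review."""
    # Shannon entropy of each epoch's predicted stage distribution
    entropy = -np.sum(probs * np.log(probs + 1e-12), axis=1)
    return np.where(entropy > threshold)[0]  # illustrative cutoff, not the paper's

probs = np.array([[0.90, 0.05, 0.05],   # confident: kept
                  [0.40, 0.35, 0.25]])  # ambiguous: flagged for user review
print(flag_uncertain_epochs(probs))     # -> [1]
```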

5.
Many visual analytics systems allow users to interact with machine learning models towards the goals of data exploration and insight generation on a given dataset. However, in some situations, insights may be less important than the production of an accurate predictive model for future use. In that case, users are more interested in generating diverse and robust predictive models, verifying their performance on holdout data, and selecting the most suitable model for their usage scenario. In this paper, we consider the concept of Exploratory Model Analysis (EMA), which is defined as the process of discovering and selecting relevant models that can be used to make predictions on a data source. We delineate the differences between EMA and the well‐known term exploratory data analysis in terms of the desired outcome of the analytic process: insights into the data or a set of deployable models. The contributions of this work are a visual analytics system workflow for EMA, a user study, and two use cases validating the effectiveness of the workflow. We found that our system workflow enabled users to generate complex models, to assess them for various qualities, and to select the most relevant model for their task.

6.
7.
Emissive media are often challenging to render: in thin regions where only a few scattering events occur, the emission is poorly sampled, while sampling events for emission can be disadvantageous due to absorption in dense regions. We extend the standard path space measurement contribution to also collect emission along path segments, not only at vertices. We apply this extension to two estimators: extending paths via scattering and distance sampling, and next event estimation. In order to do so, we unify the two approaches and derive the corresponding Monte Carlo estimators to interpret next event estimation as a solid angle sampling technique. We avoid connecting paths to vertices hidden behind dense absorbing layers of smoke by also including transmittance sampling in next event estimation. We demonstrate the advantages of our line integration approach, which generates estimators with lower variance since entire segments are accounted for. Also, our novel forward next event estimation technique yields faster run times than previous next event estimation, as it penetrates less deeply into dense volumes.
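
For reference, the quantity being accumulated is the standard emission term of the radiative transfer equation; along a segment of length t it is a line integral rather than a point evaluation at a vertex (notation: T is transmittance, σ_a the absorption coefficient, σ_t the extinction coefficient, L_e the medium's emission):

```latex
L_{\mathrm{seg}} \;=\; \int_0^{t} T(0,s)\,\sigma_a(\mathbf{x}_s)\,L_e(\mathbf{x}_s)\,\mathrm{d}s,
\qquad
T(0,s) \;=\; \exp\!\left(-\int_0^{s}\sigma_t(\mathbf{x}_u)\,\mathrm{d}u\right)
```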

8.
Robust and efficient rendering of complex lighting effects, such as caustics, remains a challenging task. While algorithms like vertex connection and merging can render such effects robustly, their significant overhead over a simple path tracer is not always justified and – as we show in this paper – also not necessary. In current rendering solutions, caustics often require the user to enable a specialized algorithm, usually a photon mapper, and hand‐tune its parameters. But even with carefully chosen parameters, photon mapping may still trace many photons that the path tracer could sample well enough, or, even worse, that are not visible at all. Our goal is robust, yet lightweight, caustics rendering. To that end, we propose a technique to identify and focus computation on the photon paths that offer significant variance reduction over samples from a path tracer. We apply this technique in a rendering solution combining path tracing and photon mapping. The photon emission is automatically guided towards regions where the photons are useful, i.e., provide substantial variance reduction for the currently rendered image. Our method achieves better photon densities with fewer light paths (and thus photons) than emission guiding approaches based on visual importance. In addition, we automatically determine an appropriate number of photons for a given scene, and the algorithm gracefully degenerates to pure path tracing for scenes that do not benefit from photon mapping.
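
A minimal sketch of what guided emission can look like (our own illustration, not the paper's implementation): maintain a discrete distribution over emission bins, weighted by the variance reduction photons from each bin provided for the current image, and defensively mix in a uniform term:

```python
import numpy as np

rng = np.random.default_rng(0)

def guided_emission(usefulness: np.ndarray, n_photons: int) -> np.ndarray:
    """usefulness[i]: estimated image contribution of emission bin i."""
    p = usefulness / usefulness.sum()
    p = 0.9 * p + 0.1 / len(p)   # keep every bin alive (assumed mixing weight)
    return rng.choice(len(p), size=n_photons, p=p)

usefulness = np.array([0.05, 3.0, 0.2, 0.01])   # e.g. bin 1 feeds a caustic
print(np.bincount(guided_emission(usefulness, 1000), minlength=4))
```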

9.
The Curriculum Vitae (CV, also referred to as a “résumé”) is an established representation of a person's academic and professional history. A typical CV comprises multiple sections associated with spatio‐temporal, nominal, hierarchical, and ordinal data. The main task of a recruiter is, given a job application with specific requirements, to compare and assess CVs in order to build a short list of promising candidates to interview. Commonly, this is done by viewing CVs side by side. This becomes challenging when comparing more than two CVs, because the reader is required to switch attention between them. Furthermore, there is no guarantee that the CVs are structured similarly, making the overview cluttered and significantly slowing down the comparison process. In order to address these challenges, in this paper we propose “CV3”, an interactive exploration environment offering users a new way to explore, assess, and compare multiple CVs in order to suggest suitable candidates for specific job requirements. We validate our system by means of domain expert feedback, the results of which highlight both the efficacy of our approach and its limitations. We learned that CV3 eases the overall burden on recruiters, thereby assisting them in the selection process.

10.
Monte Carlo methods for physically‐based light transport simulation are broadly adopted in the feature film production, animation and visual effects industries. These methods, however, often result in noisy images and have slow convergence. As such, improving the convergence of Monte Carlo rendering remains an important open problem. Gradient‐domain light transport is a recent family of techniques that can accelerate Monte Carlo rendering by up to an order of magnitude, leveraging a gradient‐based estimation and a reformulation of the rendering problem as an image reconstruction. This state‐of‐the‐art report comprehensively frames the fundamentals of gradient‐domain rendering, as well as the pragmatic details behind practical gradient‐domain uni‐ and bidirectional path tracing and photon density estimation algorithms. Moreover, we discuss the various image reconstruction schemes that are crucial to accurate and stable gradient‐domain rendering. Finally, we benchmark various gradient‐domain techniques against the state of the art in denoising methods before discussing open problems.
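
As a concrete instance of that reformulation, a common reconstruction is the screened Poisson problem (a standard formulation in the gradient‐domain rendering literature; Ĩ denotes the noisy primal image, g̃ the sampled finite‐difference gradients, α the trade‐off weight):

```latex
I^{\ast} \;=\; \arg\min_{I}\; \alpha^{2}\,\lVert I - \tilde{I} \rVert_2^{2} \;+\; \lVert \nabla I - \tilde{g} \rVert_2^{2}
```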

11.
We present the Bladder Runner, a novel tool to enable detailed visual exploration and analysis of the impact of bladder shape variation on the accuracy of dose delivery during the course of prostate cancer radiotherapy (RT). Our tool enables the investigation of individual patients and cohorts through the entire treatment process, and it can give indications of RT‐induced complications for the patient. In prostate cancer RT treatment, despite the design of an initial plan prior to dose administration, bladder toxicity remains very common. The main reason is that the dose is delivered in multiple fractions over a period of weeks, during which the anatomical variation of the bladder – due to differences in urinary filling – causes deviations between planned and delivered doses. Clinical researchers want to correlate bladder shape variations to dose deviations and toxicity risk through cohort studies, to understand which specific bladder shape characteristics are more prone to side effects. This is currently done with Dose‐Volume Histograms (DVHs), which provide limited, qualitative insight. The effect of bladder variation on dose delivery and the resulting toxicity cannot currently be examined with DVHs. To address this need, we designed and implemented the Bladder Runner, which incorporates visualization strategies in a highly interactive environment with multiple linked views. Individual patients can be explored and analyzed through the entire treatment period, while inter‐patient and temporal exploration, analysis and comparison are also supported. We demonstrate the applicability of the presented tool with a usage scenario, employing a dataset of 29 patients followed through the course of the treatment across 13 time points. We conducted an evaluation with three clinical researchers working on the investigation of RT‐induced bladder toxicity. All participants agreed that the Bladder Runner provides better understanding and new opportunities for the exploration and analysis of the involved cohort data.
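
For readers unfamiliar with the baseline, a cumulative DVH reports, for each dose level d, the fraction of the organ volume receiving at least d; the sketch below computes one from per‐voxel doses (synthetic data, for illustration only):

```python
import numpy as np

def cumulative_dvh(dose_per_voxel: np.ndarray, n_bins: int = 100):
    """Return dose axis and fraction of volume receiving >= each dose."""
    d = np.linspace(0.0, dose_per_voxel.max(), n_bins)
    volume_fraction = (dose_per_voxel[None, :] >= d[:, None]).mean(axis=1)
    return d, volume_fraction

# synthetic bladder doses in Gy, for illustration only
dose = np.random.default_rng(1).gamma(shape=2.0, scale=10.0, size=5000)
d, v = cumulative_dvh(dose)
i = np.searchsorted(d, 20.0)
print(f"fraction of volume receiving >= 20 Gy: {v[i]:.2f}")
```

Note that the histogram aggregates away all spatial information, which is precisely the limitation the abstract points out.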

12.
We propose ClustMe, a new visual quality measure to rank monochrome scatterplots based on cluster patterns. ClustMe is based on data collected from a human‐subjects study, in which 34 participants judged synthetically generated cluster patterns in 1000 scatterplots. We generated these patterns by carefully varying the free parameters of a simple Gaussian Mixture Model with two components, and asked the participants to count the number of clusters they could see (1, or more than 1). Based on the results, we form ClustMe by selecting, among 7 different state‐of‐the‐art merging techniques, the model that best predicts these human judgments (Demp). To quantitatively evaluate ClustMe, we conducted a second study, in which 31 human subjects ranked 435 pairs of scatterplots of real and synthetic data in terms of cluster pattern complexity. We use this data to compare ClustMe's performance to 4 other state‐of‐the‐art clustering measures, including the well‐known Clumpiness scagnostic. We found that, of all measures, ClustMe is in strongest agreement with the human rankings.
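
The stimulus generation is straightforward to reproduce in outline: draw each scatterplot from a two‐component Gaussian Mixture Model whose free parameters (mixing weight, means, covariances) are varied. A minimal sketch (the parameter values are ours, not the study's):

```python
import numpy as np

rng = np.random.default_rng(7)

def gmm_scatter(n, w, mu1, mu2, cov1, cov2):
    """Sample n points from a 2-component GMM with mixing weight w."""
    comp = rng.random(n) < w                    # per-point component choice
    pts1 = rng.multivariate_normal(mu1, cov1, size=n)
    pts2 = rng.multivariate_normal(mu2, cov2, size=n)
    return np.where(comp[:, None], pts1, pts2)

# well-separated means: viewers will tend to report "more than 1" cluster
pts = gmm_scatter(500, w=0.5, mu1=[0, 0], mu2=[4, 0],
                  cov1=np.eye(2), cov2=np.eye(2))
print(pts.shape)  # (500, 2)
```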

13.
Feature matching is one of the most basic and pervasive problems in computer vision, and it has become a primary component in big data analytics. Many tools have been developed for extracting and matching features in video streams and image frames. However, one of the most basic tools – a tool for simply visualizing matched features for the comparison and evaluation of computer vision algorithms – is not generally available, especially when dealing with a large number of matching lines. We introduce VisFM, an integrated visual analysis system for comprehending and exploring image feature matchings. VisFM presents a matching view with intuitive line bundling to provide useful insights regarding the quality of matched features. VisFM is capable of showing a summarization of the features and matchings through a group view to assist domain experts in observing feature matching patterns from multiple perspectives. VisFM incorporates a series of interactions for exploring the feature data. We demonstrate the visual efficacy of VisFM by applying it to three scenarios. Informal expert feedback, provided by our collaborator in computer vision, demonstrates how VisFM can be used for comparing and analysing feature matchings when the goal is to improve an image retrieval algorithm.

14.
The stochastic nature of Monte Carlo rendering algorithms inherently produces noisy images. Essentially, three approaches have been developed to solve this issue: improving the ray‐tracing strategies to reduce pixel variance, providing adaptive sampling by increasing the number of rays in regions that need it, and filtering the noisy image as a post‐process. Although the algorithms from the latter category introduce bias, they remain highly attractive as they quickly improve the visual quality of the images, are compatible with all sorts of rendering effects, have a low computational cost and, for some of them, avoid deep modifications of the rendering engine. In this paper, we build upon recent advances in both non‐local and collaborative filtering methods to propose a new efficient denoising operator for Monte Carlo rendering. Starting from the local statistics which emanate from the pixels' sample distributions, we enrich the image with local covariance measures and introduce a non‐local Bayesian filter which is specifically designed to address the noise stemming from Monte Carlo rendering. The resulting algorithm only requires the rendering engine to provide for each pixel a histogram and a covariance matrix of its color samples. Compared to state‐of‐the‐art sample‐based methods, we obtain improved denoising results, especially in dark areas, with a large increase in speed and more robustness with respect to the main parameter of the algorithm. We provide a detailed mathematical exposition of our Bayesian approach, discuss extensions to multiscale execution, adaptive sampling and animated scenes, and experimentally validate it on a collection of scenes.
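
The per‐pixel inputs named at the end (a histogram and a covariance matrix of the color samples) can be accumulated online during rendering; a Welford‐style sketch for the mean and covariance part (our illustration, not the paper's code):

```python
import numpy as np

class PixelStats:
    """Online mean/covariance of a pixel's RGB samples (Welford update)."""
    def __init__(self):
        self.n = 0
        self.mean = np.zeros(3)
        self.m2 = np.zeros((3, 3))   # running sum of residual outer products

    def add(self, sample: np.ndarray):
        self.n += 1
        delta = sample - self.mean
        self.mean += delta / self.n
        self.m2 += np.outer(delta, sample - self.mean)

    def covariance(self) -> np.ndarray:
        return self.m2 / max(self.n - 1, 1)

px = PixelStats()
for s in np.random.default_rng(3).random((64, 3)):   # 64 fake RGB samples
    px.add(s)
print(px.mean, np.diag(px.covariance()))
```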

15.
Monte‐Carlo path tracing techniques can generate stunning visualizations of medical volumetric data. In a clinical context, such renderings have turned out to be valuable for communication, education, and diagnosis. Because a large number of computationally expensive lighting samples is required to converge to a smooth result, progressive rendering is the only option for interactive settings: low‐sampled, noisy images are shown while the user explores the data, and as soon as the camera is at rest the view is progressively refined. During interaction, the visual quality is low, which strongly impedes the user's experience. Even worse, when a data set is explored in virtual reality, the camera is never at rest, leading to constantly low image quality and strong flickering. In this work we present an approach to bring volumetric Monte‐Carlo path tracing to the interactive domain by reusing samples over time. To this end, we transfer the idea of temporal antialiasing from surface rendering to volume rendering. We show how to reproject volumetric ray samples even though they cannot be pinned to a particular 3D position, present an improved weighting scheme that makes longer history trails possible, and define an error accumulation method that downweights less appropriate older samples. Furthermore, we exploit reprojection information to adaptively determine the number of newly generated path tracing samples for each individual pixel. Our approach is designed for static, medical data with both volumetric and surface‐like structures. It achieves good‐quality volumetric Monte‐Carlo renderings with little noise, and is also usable in a VR context.
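
At its core, temporal sample reuse is an exponential moving average between the reprojected history and the new low‐sample frame; a simplified sketch (the paper's actual weighting and error accumulation are more elaborate, and the blend factor here is an assumption):

```python
import numpy as np

def temporal_blend(current, history, valid, alpha=0.1):
    """current/history: (H, W, 3) images; valid: (H, W) reprojection mask."""
    blended = (1.0 - alpha) * history + alpha * current
    # where reprojection failed (disocclusion, off-screen), restart from current
    return np.where(valid[..., None], blended, current)

rng = np.random.default_rng(5)
out = temporal_blend(rng.random((4, 4, 3)), rng.random((4, 4, 3)),
                     rng.random((4, 4)) > 0.2)
print(out.shape)  # (4, 4, 3)
```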

16.
Visualizing and extracting three‐dimensional features is important for many computational science applications, each with their own feature definitions and data types. While some are simple to state and implement (e.g. isosurfaces), others require more complicated mathematics (e.g. multiple derivatives, curvature, eigenvectors, etc.). Correctly implementing mathematical definitions is difficult, so experimenting with new features requires substantial investments. Furthermore, traditional interpolants rarely support the necessary derivatives, and approximations can reduce numerical stability. Our new approach directly translates mathematical notation into practical visualization and feature extraction, with minimal mental and implementation overhead. Using a mathematically expressive domain‐specific language, Diderot, we compute direct volume renderings and particle‐based feature samplings for a range of mathematical features. Non‐expert users can experiment with feature definitions without any exposure to meshes, interpolants, derivative computation, etc. We demonstrate high‐quality results on notoriously difficult features, such as ridges and vortex cores, using working code simple enough to be presented in its entirety.
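
As an example of the derivative‐heavy definitions meant here, consider the classical height‐ridge criterion for a scalar field f (a standard formulation from the literature, quoted for illustration, not necessarily the paper's): with Hessian eigenpairs (λ_i, e_i) ordered λ1 ≤ λ2 ≤ λ3, a point lies on a ridge line when

```latex
\nabla f \cdot \mathbf{e}_1 = 0, \qquad
\nabla f \cdot \mathbf{e}_2 = 0, \qquad
\lambda_1 \le \lambda_2 < 0
```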

17.
Reminiscence is an important aspect of our lives. It preserves precious memories, allows us to form our own identities and encourages us to accept the past. Our work takes advantage of modern sensor technologies to support reminiscence, enabling self‐monitoring of personal activities and individual movement in space and time on a daily basis. This paper presents MyEvents, a web‐based personal visual analytics platform designed for non‐computing experts, that allows for the collection of long‐term location and movement data and the generation of event mementos. Our research is focused on two prominent goals in event reminiscence: (1) selection subjectivity and human involvement in the process of self‐knowledge discovery and memento creation; and (2) the enhancement of event familiarity by presenting target events and their related information for optimal memory recall and reminiscence. A novel multi‐significance event ranking model is proposed to determine significant events in the personal history according to user preferences for event category, frequency and regularity. The evaluation results show that MyEvents effectively fulfils the reminiscence goals and tasks.
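
The abstract does not give the ranking model's form; a hypothetical weighted‐sum reading of "category, frequency and regularity" could look like this (all weights and scoring terms are our assumptions, not MyEvents'):

```python
from dataclasses import dataclass

@dataclass
class Event:
    category: str
    frequency: float    # how common similar events are, in [0, 1]
    regularity: float   # how routine the event is, in [0, 1]

def significance(e: Event, prefs: dict, w=(0.5, 0.3, 0.2)) -> float:
    """Rare, irregular events in preferred categories rank highest."""
    return (w[0] * prefs.get(e.category, 0.0)
            + w[1] * (1.0 - e.frequency)
            + w[2] * (1.0 - e.regularity))

events = [Event("travel", 0.1, 0.2), Event("commute", 0.9, 0.95)]
prefs = {"travel": 1.0, "commute": 0.1}
print(max(events, key=lambda e: significance(e, prefs)).category)  # travel
```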

18.
In the field of organic electronics, understanding complex material morphologies and their role in efficient charge transport in solar cells is extremely important. Related processes are studied using the Ising model and Kinetic Monte Carlo simulations, resulting in large ensembles of stochastic trajectories. Naive visualization of these trajectories, individually or as a whole, does not lead to new knowledge discovery through exploration. In this paper, we present novel visualization and exploration methods to analyze this complex dynamic data, which provide succinct and meaningful abstractions leading to scientific insights. We propose a morphology abstraction yielding a network composed of material pockets and the interfaces between them, which serves as the backbone for the visualization of the charge diffusion. The trajectory network is created by implicitly attracting the trajectories to the skeleton of the morphology through a relaxation process. Each individual trajectory is then represented as a connected sequence of nodes in the skeleton. The final network summarizes all of these sequences in a single aggregated network. We apply our method to three different morphologies and demonstrate its suitability for exploring this kind of data.
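
The aggregation step can be sketched compactly: snap every trajectory sample to its nearest skeleton node and count transitions between consecutive nodes; the counts over all trajectories form the aggregated network. This sketch is a simplification of the relaxation‐based attraction described above:

```python
import numpy as np
from collections import Counter

def aggregate_network(trajectories, skeleton: np.ndarray) -> Counter:
    """trajectories: list of (T, 3) arrays; skeleton: (K, 3) node positions."""
    edges = Counter()
    for traj in trajectories:
        # nearest skeleton node for every trajectory sample
        nodes = np.argmin(np.linalg.norm(
            traj[:, None, :] - skeleton[None, :, :], axis=2), axis=1)
        for a, b in zip(nodes[:-1], nodes[1:]):
            if a != b:
                edges[(a, b)] += 1   # directed transition count
    return edges

rng = np.random.default_rng(2)
net = aggregate_network([rng.random((50, 3)) for _ in range(5)],
                        rng.random((20, 3)))
print(len(net), "aggregated edges")
```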

19.
Parallel simulation codes often suffer from performance bottlenecks due to network congestion, leaving millions of dollars of investment underutilized. Given a network topology, it is critical to understand how different applications, job placements, routing schemes, etc., are affected by and contribute to network congestion, especially for large and complex networks. Understanding and optimizing communication on large‐scale networks is an active area of research. Domain experts often use exploratory tools to develop both intuitive and formal metrics for network health and performance. This paper presents TreeScope, an interactive, web‐based visualization tool for exploring network traffic on large‐scale fat‐tree networks. TreeScope encodes the network topology using a tailored matrix‐based representation and provides detailed visualization of all traffic in the network. We report on the design process of TreeScope, which has been received positively by network researchers as well as system administrators. Through case studies of real and simulated data, we demonstrate how TreeScope's visual design and interactive support for complex queries on network traffic can provide experts with new insights into the occurrences and causes of congestion in the network.

20.
Recently, deep learning approaches have proven successful at removing noise from Monte Carlo (MC) rendered images at extremely low sampling rates, e.g., 1–4 samples per pixel (spp). While these methods provide dramatic speedups, they operate on uniformly sampled MC rendered images. However, the full promise of low sample counts requires both adaptive sampling and reconstruction/denoising. Unfortunately, traditional adaptive sampling techniques fail to handle cases with low sampling rates, since there is insufficient information to reliably calculate their required features, such as variance and contrast. In this paper, we address this issue by proposing a deep learning approach for joint adaptive sampling and reconstruction of MC rendered images with extremely low sample counts. Our system consists of two convolutional neural networks (CNNs), responsible for estimating the sampling map and denoising, separated by a renderer. Specifically, we first render a scene with one spp and then use the first CNN to estimate a sampling map, which is used to adaptively distribute three additional samples per pixel on average. We then filter the resulting render with the second CNN to produce the final denoised image. We train both networks by minimizing the error between the denoised and ground truth images on a set of training scenes. To use backpropagation for training both networks, we propose an approach to effectively compute the gradient of the renderer. We demonstrate that our approach produces better results compared to other sampling techniques. On average, our 4 spp renders are comparable to 6 spp from uniform sampling with deep learning‐based denoising; that is, 50% more uniformly distributed samples are required to achieve equal quality without adaptive sampling.
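
The step between the two CNNs can be illustrated with the budget normalization it implies: turn the first network's raw output into non‐negative per‐pixel counts that average three extra samples per pixel (the softplus and rounding choices are our assumptions about one reasonable normalization):

```python
import numpy as np

def sampling_map(raw: np.ndarray, avg_extra_spp: float = 3.0) -> np.ndarray:
    """Normalize a raw CNN output into integer per-pixel sample counts."""
    pos = np.logaddexp(0.0, raw)                  # softplus keeps counts positive
    scaled = pos * (avg_extra_spp * raw.size / pos.sum())
    return np.round(scaled).astype(int)

raw = np.random.default_rng(4).normal(size=(8, 8))  # stand-in for CNN output
m = sampling_map(raw)
print(m.mean())   # close to 3 extra samples per pixel on average
```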
