Similar Articles
20 similar articles found (search time: 46 ms)
1.
Retrieving charts from a large corpus is a fundamental task that can benefit numerous applications such as visualization recommendations. The retrieved results are expected to conform to both explicit visual attributes (e.g., chart type, colormap) and implicit user intents (e.g., design style, context information) that vary across application scenarios. However, existing example-based chart retrieval methods are built upon non-decoupled and low-level visual features that are hard to interpret, while definition-based ones are constrained to pre-defined attributes that are hard to extend. In this work, we propose a new framework, namely WYTIWYR (What-You-Think-Is-What-You-Retrieve), that integrates user intents into the chart retrieval process. The framework consists of two stages: first, the Annotation stage disentangles the visual attributes within the query chart; and second, the Retrieval stage embeds the user's intent with a customized text prompt as well as the bitmap query chart to recall targeted retrieval results. We develop a prototype WYTIWYR system leveraging a contrastive language-image pre-training (CLIP) model to achieve zero-shot classification as well as multi-modal input encoding, and test the prototype on a large corpus of charts crawled from the Internet. Quantitative experiments, case studies, and qualitative interviews are conducted. The results demonstrate the usability and effectiveness of our proposed framework.
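As a hedged illustration of the zero-shot attribute classification step described in this abstract, the following Python sketch scores a query chart image against candidate chart-type labels with the open-source CLIP model. The label list, file name, and prompt wording are assumptions for illustration, not the authors' WYTIWYR code.

```python
# Hypothetical sketch of CLIP-based zero-shot chart-type classification
# (not the authors' WYTIWYR implementation; labels and file name are assumptions).
import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

chart_types = ["bar chart", "line chart", "scatter plot", "pie chart", "heatmap"]
image = preprocess(Image.open("query_chart.png")).unsqueeze(0).to(device)
text = clip.tokenize([f"a {t}" for t in chart_types]).to(device)

with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)
    image_features /= image_features.norm(dim=-1, keepdim=True)
    text_features /= text_features.norm(dim=-1, keepdim=True)
    # Cosine similarity scaled by 100, softmaxed into per-label probabilities
    probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)

print(dict(zip(chart_types, probs[0].tolist())))
```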

2.
Analyzing stenoses of the internal carotids – local constrictions of the artery – is a critical clinical task in cardiovascular disease treatment and prevention. For this purpose, we propose a self-contained pipeline for the visual analysis of carotid artery geometries. The only inputs are computed tomography angiography (CTA) scans, which are already recorded in clinical routine. We show how integrated model extraction and visualization can help to efficiently detect stenoses and we provide means for automatic, highly accurate stenosis degree computation. We directly connect multiple sophisticated processing stages, including a neural prediction network for lumen and plaque segmentation and automatic global diameter computation. We enable interactive and retrospective user control over the processing stages. Our aims are to increase user trust by making the underlying data validatable on the fly, to decrease adoption costs by minimizing external dependencies, and to optimize scalability by streamlining the data processing. We use interactive visualizations for data inspection and adaptation to guide the user through the processing stages. The framework was developed and evaluated in close collaboration with radiologists and neurologists. It has been used to extract and analyze over 100 carotid bifurcation geometries and is built with a modular architecture, available as an extendable open-source platform.
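The abstract mentions automatic stenosis degree computation from lumen diameters; one widely used clinical definition is the NASCET criterion. The minimal sketch below shows only that formula; the function name and inputs are illustrative assumptions, and the paper's pipeline may compute the degree differently.

```python
def stenosis_degree_nascet(d_stenosis_mm: float, d_distal_mm: float) -> float:
    """Percent stenosis by the NASCET criterion:
    (1 - narrowest lumen diameter / normal distal diameter) * 100.
    Illustrative only; not taken from the paper's implementation."""
    return (1.0 - d_stenosis_mm / d_distal_mm) * 100.0

# Example: a 2.1 mm residual lumen against a 6.0 mm distal diameter
print(f"{stenosis_degree_nascet(2.1, 6.0):.1f}% stenosis")  # ~65.0%
```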

3.
We present a methodology for making sense of the communicative role of data visualizations in journalistic storytelling and share findings from surveying water-related data stories. Data stories are a genre of long-form journalism that integrate text, data visualization, and other visual expressions (e.g., photographs, illustrations, videos) for the purpose of data-driven storytelling. In the last decade, a considerable number of data stories about a wide range of topics have been published worldwide. Authors use a variety of techniques to make complex phenomena comprehensible and use visualizations as communicative devices that shape the understanding of a given topic. Despite the popularity of data stories, we, as scholars, still lack a methodological framework for assessing the communicative role of visualizations in data stories. To this end, we draw from data journalism, visual culture, and multimodality studies to propose an interpretative framework in six stages. The process begins with the analysis of content blocks and framing elements and ends with the identification of dimensions, patterns, and relationships between textual and visual elements. The framework is put to the test by analyzing 17 data stories about water-related issues. Our observations from the survey illustrate how data visualizations can shape the framing of complex topics.

4.
Machine learning (ML) models are nowadays used in complex applications in various domains, such as medicine, bioinformatics, and other sciences. Due to their black box nature, however, it may sometimes be hard to understand and trust the results they provide. This has increased the demand for reliable visualization tools related to enhancing trust in ML models, which has become a prominent topic of research in the visualization community over the past decades. To provide an overview and present the frontiers of current research on the topic, we present a State-of-the-Art Report (STAR) on enhancing trust in ML models with the use of interactive visualization. We define and describe the background of the topic, introduce a categorization for visualization techniques that aim to accomplish this goal, and discuss insights and opportunities for future research directions. Among our contributions is a categorization of trust against different facets of interactive ML, expanded and improved from previous research. Our results are investigated from different analytical perspectives: (a) providing a statistical overview, (b) summarizing key findings, (c) performing topic analyses, and (d) exploring the data sets used in the individual papers, all with the support of an interactive web-based survey browser. We intend this survey to be beneficial for visualization researchers whose interests involve making ML models more trustworthy, as well as researchers and practitioners from other disciplines in their search for effective visualization techniques suitable for solving their tasks with confidence and conveying meaning to their data.

5.
Over the past years, an increasing number of publications in information visualization, especially within the field of visual analytics, have mentioned the term “embedding” when describing the computational approach. Within this context, embeddings are usually (relatively) low-dimensional, distributed representations of various data types (such as texts or graphs), and since they have proven to be extremely useful for a variety of data analysis tasks across various disciplines and fields, they have become widely used. Existing visualization approaches aim to either support exploration and interpretation of the embedding space through visual representation and interaction, or aim to use embeddings as part of the computational pipeline for addressing downstream analytical tasks. To the best of our knowledge, this is the first survey that takes a detailed look at embedding methods through the lens of visual analytics, and the purpose of our survey article is to provide a systematic overview of the state of the art within the emerging field of embedding visualization. We design a categorization scheme for our approach, analyze the current research frontier based on peer-reviewed publications, and discuss existing trends, challenges, and potential research directions for using embeddings in the context of visual analytics. Furthermore, we provide an interactive survey browser for the collected and categorized survey data, which currently includes 122 entries that appeared between 2007 and 2023.

6.
Modern visualization software and programming libraries have made data visualization construction easier for everyone. However, the extent of accessibility design they support for blind and low-vision people is relatively unknown. It is also unclear how they can improve chart content accessibility beyond conventional alternative text and data tables. To address these issues, we examine the current accessibility features in popular visualization tools, revealing limited support for the standard accessibility methods and scarce support for chart content exploration. Next, we investigate two promising accessibility approaches that provide off-the-shelf solutions for chart content accessibility: structured navigation and conversational interaction. We present a comparative evaluation study and discuss what to consider when incorporating them into visualization tools.

7.
ParaDime is a framework for parametric dimensionality reduction (DR). In parametric DR, neural networks are trained to embed high-dimensional data items in a low-dimensional space while minimizing an objective function. ParaDime builds on the idea that the objective functions of several modern DR techniques result from transformed inter-item relationships. It provides a common interface for specifying these relations and transformations and for defining how they are used within the losses that govern the training process. Through this interface, ParaDime unifies parametric versions of DR techniques such as metric MDS, t-SNE, and UMAP. It allows users to fully customize all aspects of the DR process. We show how this ease of customization makes ParaDime suitable for experimenting with interesting techniques such as hybrid classification/embedding models and supervised DR. This way, ParaDime opens up new possibilities for visualizing high-dimensional data.
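To make the notion of parametric dimensionality reduction concrete, here is a minimal PyTorch sketch in which a small network embeds items in 2D while a metric-MDS-style stress loss penalizes mismatches between input-space and embedding-space distances. This is a generic illustration on assumed toy data, not the ParaDime interface.

```python
# Minimal parametric-DR sketch (metric-MDS-style loss); not the ParaDime API.
import torch
import torch.nn as nn

X = torch.randn(200, 50)                      # toy high-dimensional data
net = nn.Sequential(nn.Linear(50, 64), nn.ReLU(), nn.Linear(64, 2))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

D_high = torch.cdist(X, X)                    # inter-item relations in input space
for step in range(500):
    Y = net(X)
    D_low = torch.cdist(Y, Y)                 # relations in the embedding space
    loss = ((D_high - D_low) ** 2).mean()     # stress-like objective
    opt.zero_grad()
    loss.backward()
    opt.step()

embedding = net(X).detach()                   # the trained net also embeds unseen items
```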

8.
Task-incremental learning (Task-IL) aims to enable an intelligent agent to continuously accumulate knowledge from new learning tasks without catastrophically forgetting what it has learned in the past. It has drawn increasing attention in recent years, with many algorithms being proposed to mitigate neural network forgetting. However, none of the existing strategies is able to completely eliminate the issue. Moreover, explaining and fully understanding what knowledge is forgotten, and how, during the incremental learning process remains under-explored. In this paper, we propose KnowledgeDrift, a visual analytics framework, to interpret network forgetting with three objectives: (1) to identify when the network fails to memorize past knowledge, (2) to visualize what information has been forgotten, and (3) to diagnose how knowledge attained in the new model interferes with what was learned in the past. Our analytical framework first identifies the occurrence of forgetting by tracking task performance during the incremental learning process and then provides in-depth inspections of drifted information at various levels of data granularity. KnowledgeDrift allows analysts and model developers to enhance their understanding of network forgetting and compare the performance of different incremental learning algorithms. Three case studies are conducted in the paper to further provide insights and guidance for users to effectively diagnose catastrophic forgetting over time.
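A common way to quantify the forgetting that such tools visualize is the per-task drop between the best accuracy ever reached on a task and the accuracy after training on the final task. The sketch below computes this standard continual-learning metric on toy numbers; it is background illustration only, not part of KnowledgeDrift.

```python
# Hedged sketch: per-task forgetting from an accuracy matrix (toy numbers).
import numpy as np

# acc[i, j] = accuracy on task j after training on task i
acc = np.array([
    [0.95, 0.00, 0.00],
    [0.80, 0.93, 0.00],
    [0.55, 0.85, 0.92],
])
n_tasks = acc.shape[0]
forgetting = [acc[:n_tasks - 1, j].max() - acc[-1, j] for j in range(n_tasks - 1)]
for j, f in enumerate(forgetting):
    print(f"task {j}: forgetting = {f:.2f}")   # large drops mark what was forgotten
```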

9.
In this paper, we present an integrated visual analytics approach to support the parametrization and exploration of flow visualization based on the finite-time Lyapunov exponent. Such visualization of time-dependent flow faces various challenges, including the choice of appropriate advection times, temporal regions of interest, and spatial resolution. Our approach eases these challenges by providing the user with context by means of parametric aggregations, with support and guidance for a more directed exploration, and with a set of derived measures for better qualitative assessment. We demonstrate the utility of our approach with examples from computational fluid dynamics and time-dependent dynamical systems.
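For readers unfamiliar with the finite-time Lyapunov exponent, the sketch below follows the textbook recipe: advect a particle grid through the flow for time T, differentiate the resulting flow map, and take the scaled log of the largest eigenvalue of the Cauchy-Green tensor. The steady double-gyre-like field, Euler integration, and grid resolution are illustrative assumptions, not the paper's setup.

```python
# Minimal FTLE sketch on a steady double-gyre-like field; illustrative only.
import numpy as np

def velocity(x, y):
    u = -np.pi * np.sin(np.pi * x) * np.cos(np.pi * y)
    v = np.pi * np.cos(np.pi * x) * np.sin(np.pi * y)
    return u, v

nx, ny, T, steps = 200, 100, 5.0, 500
x, y = np.meshgrid(np.linspace(0, 2, nx), np.linspace(0, 1, ny))
px, py = x.copy(), y.copy()
dt = T / steps
for _ in range(steps):                         # forward advection (explicit Euler)
    u, v = velocity(px, py)
    px, py = px + dt * u, py + dt * v

# Flow-map gradient via finite differences, then FTLE from the largest
# eigenvalue of the right Cauchy-Green deformation tensor.
dx, dy = 2.0 / (nx - 1), 1.0 / (ny - 1)
dpx_dy, dpx_dx = np.gradient(px, dy, dx)
dpy_dy, dpy_dx = np.gradient(py, dy, dx)
ftle = np.zeros_like(px)
for i in range(ny):
    for j in range(nx):
        F = np.array([[dpx_dx[i, j], dpx_dy[i, j]],
                      [dpy_dx[i, j], dpy_dy[i, j]]])
        C = F.T @ F
        ftle[i, j] = np.log(np.sqrt(np.linalg.eigvalsh(C)[-1])) / abs(T)
```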

10.
Design studies are an integral method of visualization research with hundreds of instances in the literature. Although taught as a theory, the practical implementation of design studies is often excluded from visualization pedagogy due to the lengthy time commitments associated with such studies. Recent research has addressed this challenge and developed an expedited design study framework, the Design Study “Lite” Methodology (DSLM), which can implement design studies with novice students within just 14 weeks. The framework was developed and evaluated based on five semesters of in-person data visualization courses with 30 students or fewer and was implemented in conjunction with Service-Learning (S-L). With the growth and popularity of the data visualization field, and the teaching environment created by the COVID-19 pandemic, more academic institutions are offering visualization courses online. Therefore, in this paper, we strengthen and validate the epistemological foundations of the DSLM framework by testing its (1) adaptability to online learning environments and conditions and (2) scalability to larger classes with up to 57 students. We present two online implementations of the DSLM framework, with and without Service-Learning (S-L), to test the adaptability and scalability of the framework. We further demonstrate that the framework can be applied effectively without the S-L component. We reflect on our experience with the online DSLM implementations and contribute a detailed retrospective analysis using thematic analysis and grounded theory methods to draw valuable recommendations and guidelines for future applications of the framework. This work verifies that DSLM can be used successfully in online classes to teach design study methodology. Finally, we contribute novel additions to the DSLM framework to further enhance it for teaching and learning design studies in the classroom. The preprint and supplementary materials for this paper can be found at https://osf.io/6bjx5/.

11.
How do we ensure the veracity of science? The act of manipulating or fabricating scientific data has led to many high-profile fraud cases and retractions. Detecting manipulated data, however, is a challenging and time-consuming endeavor. Automated detection methods are limited due to the diversity of data types and manipulation techniques. Furthermore, patterns automatically flagged as suspicious can have reasonable explanations. Instead, we propose a nuanced approach where experts analyze tabular datasets, e.g., as part of the peer-review process, using guided, interactive visualization. In this paper, we present an analysis of how manipulated datasets are created and the artifacts these techniques generate. Based on these findings, we propose a suite of visualization methods to surface potential irregularities. We have implemented these methods in Ferret, a visualization tool for data forensics work. Ferret makes potential data issues salient and provides guidance on spotting signs of tampering and differentiating them from truthful data.

12.
We propose a novel visualization technique for graphs that are attributed with scalar data. In many scenarios, these attributes (e.g., birth date in a family network) provide ambient context information for the graph structure, whose consideration is important for different visual graph analysis tasks. Graph attributes are usually conveyed using different visual representations (e.g., color, size, shape) or by reordering the graph structure according to the attribute domain (e.g., timelines). While visual encodings allow graphs to be arranged in a readable layout, assessing contextual information such as the relative similarities of attributes across the graph is often cumbersome. In contrast, attribute-based graph reordering serves the comparison task of attributes, but typically strongly impairs the readability of the structural information given by the graph's topology. In this work, we augment force-directed node-link diagrams with a continuous ambient representation of the attribute context. This way, we provide a consistent overview of the graph's topological structure as well as its attributes, supporting a wide range of graph-related analysis tasks. We resort to an intuitive height field metaphor, illustrated by a topographic map rendering using contour lines and suitable color maps. Contour lines visually connect nodes of similar attribute values and depict their relative arrangement within the global context. Moreover, our contextual representation supports visualizing attribute value ranges associated with graph nodes (e.g., lifespans in a family network) as trajectories routed through this height field. We discuss how user interaction with both the structural and the contextual information fosters exploratory graph analysis tasks. The effectiveness and versatility of our technique are confirmed in a user study and case studies from various application domains.
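A rough sketch of the core idea, interpolating node attribute values into a continuous field over a force-directed layout and drawing contour lines beneath the node-link diagram, could look like the following. The toy graph, attribute, and rendering parameters are assumptions; the paper's height-field rendering is considerably more refined.

```python
# Hedged sketch: ambient attribute field with contours under a force-directed layout.
import networkx as nx
import numpy as np
import matplotlib.pyplot as plt
from scipy.interpolate import griddata

G = nx.karate_club_graph()
attr = {n: float(G.degree(n)) for n in G}            # toy scalar attribute per node
pos = nx.spring_layout(G, seed=42)                   # force-directed layout

pts = np.array([pos[n] for n in G])
vals = np.array([attr[n] for n in G])
gx, gy = np.mgrid[pts[:, 0].min():pts[:, 0].max():200j,
                  pts[:, 1].min():pts[:, 1].max():200j]
field = griddata(pts, vals, (gx, gy), method="cubic")  # continuous "height field"

plt.contourf(gx, gy, field, levels=12, cmap="terrain", alpha=0.6)
plt.contour(gx, gy, field, levels=12, colors="k", linewidths=0.4)
nx.draw_networkx(G, pos, node_size=60, with_labels=False)
plt.axis("off")
plt.show()
```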

13.
Realistic rendering using discrete reflectance measurements is challenging, because arbitrary directions on the light and view hemispheres are queried at render time, incurring large memory requirements and the need for interpolation. This explains the desire for compact and continuously parametrized models akin to analytic BRDFs; however, fitting BRDF parameters to complex data such as BTF texels can prove challenging, as models tend to describe restricted function spaces that cannot encompass real-world behavior. Recent advances in this area have increasingly relied on neural representations that are trained to reproduce acquired reflectance data. The associated training process is extremely costly and must typically be repeated for each material. Inspired by autoencoders, we propose a unified network architecture that is trained on a variety of materials, and which projects reflectance measurements to a shared latent parameter space. Similarly to SVBRDF fitting, real-world materials are represented by parameter maps, and the decoder network is analogous to the analytic BRDF expression (also parametrized on light and view directions for practical rendering application). With this approach, encoding and decoding materials becomes a simple matter of evaluating the network. We train and validate on BTF datasets of the University of Bonn, but there are no prerequisites on either the number of angular reflectance samples or the sample positions. Additionally, we show that the latent space is well-behaved and can be sampled from, for applications such as mipmapping and texture synthesis.
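To illustrate the kind of decoder the abstract describes, a network evaluated like an analytic BRDF on a latent material code plus light and view directions, here is a hedged PyTorch sketch. Layer sizes, the latent dimension, and the activation choices are arbitrary assumptions, not the paper's trained architecture.

```python
# Hedged sketch of a latent-conditioned "neural BRDF" decoder; illustrative only.
import torch
import torch.nn as nn

class NeuralBRDFDecoder(nn.Module):
    def __init__(self, latent_dim: int = 32, hidden: int = 128):
        super().__init__()
        # Input: latent material code + light direction (3) + view direction (3)
        self.mlp = nn.Sequential(
            nn.Linear(latent_dim + 6, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3), nn.Softplus(),   # non-negative RGB reflectance
        )

    def forward(self, z, wi, wo):
        return self.mlp(torch.cat([z, wi, wo], dim=-1))

decoder = NeuralBRDFDecoder()
z = torch.randn(1, 32)                   # latent code, e.g. looked up from a parameter map
wi = torch.tensor([[0.0, 0.0, 1.0]])     # incident light direction
wo = torch.tensor([[0.3, 0.0, 0.95]])    # outgoing view direction
rgb = decoder(z, wi, wo)                 # evaluated like an analytic BRDF at render time
```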

14.
Real-world event sequences are often complex and heterogeneous, making it difficult to create meaningful visualizations using simple data aggregation and visual encoding techniques. Consequently, visualization researchers have developed numerous visual summarization techniques to generate concise overviews of sequential data. These techniques vary widely in terms of summary structures and contents, and currently there is a knowledge gap in understanding the effectiveness of these techniques. In this work, we present the design and results of an insight-based crowdsourcing experiment evaluating three existing visual summarization techniques: CoreFlow, SentenTree, and Sequence Synopsis. We compare the visual summaries generated by these techniques across three tasks, on six datasets, at six levels of granularity. We analyze the effects of these variables on summary quality, as rated by participants, and on completion time of the experiment tasks. Our analysis shows that Sequence Synopsis produces the highest-quality visual summaries for all three tasks, but understanding Sequence Synopsis results also takes the longest time. We also find that the participants evaluate visual summary quality based on two aspects: content and interpretability. We discuss the implications of our findings for developing and evaluating new visual summarization techniques.

15.
The study of vascular structures, using medical 3D models, is an active field of research. Illustrative visualizations have been applied to this domain in multiple ways. Researchers have made the geometric properties of vasculature more comprehensible and augmented the surface with representations of multivariate clinical data. Techniques that go beyond the application of colour-maps or simple shading approaches require a surface parameterization, that is, texture coordinates, in order to overcome locality. When extracting 3D models, the computation of texture coordinates on the mesh is not always part of the data processing pipeline. We combine existing techniques into a simple parameterization approach that is suitable for tree-like structures. The parameterization is done with respect to a pre-defined source vertex. For this, we present an automatic algorithm that detects the tree root. The parameterization is partly done in screen space and recomputed per frame. However, the screen-space computation comes with positive features that are not present in object-space approaches. We show how the resulting texture coordinates can be used for varying hatching, contour parameterization, display of decals, as additional depth cues, and for feature extraction. A further post-processing step based on the parameterization allows for a segmentation of the structure and visualization of its tree topology.
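As a simplified illustration of parameterizing a tree-like structure with respect to a root vertex, the sketch below computes shortest-path distances from a chosen root over the mesh's edge graph and normalizes them into a one-dimensional texture coordinate. The tiny example mesh and the object-space formulation are assumptions; the paper additionally works in screen space and detects the root automatically.

```python
# Hedged sketch: per-vertex texture coordinate from graph distance to a root vertex.
import numpy as np
from scipy.sparse import coo_matrix
from scipy.sparse.csgraph import dijkstra

def parameterize_from_root(vertices, edges, root):
    """vertices: (n, 3) array; edges: (m, 2) vertex index pairs; root: vertex index."""
    lengths = np.linalg.norm(vertices[edges[:, 0]] - vertices[edges[:, 1]], axis=1)
    n = len(vertices)
    graph = coo_matrix((lengths, (edges[:, 0], edges[:, 1])), shape=(n, n))
    dist = dijkstra(graph, directed=False, indices=root)
    return dist / dist.max()          # normalized 1D texture coordinate in [0, 1]

# Toy example: a 4-vertex chain with the root at vertex 0
verts = np.array([[0, 0, 0], [1, 0, 0], [2, 0, 0], [3, 0, 0]], float)
edges = np.array([[0, 1], [1, 2], [2, 3]])
print(parameterize_from_root(verts, edges, root=0))  # approx. [0., 0.333, 0.667, 1.]
```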

16.
Autonomous unmanned aerial vehicles are complex systems of hardware, software, and human input. Understanding this complexity is key to their development and operation. Information visualizations already exist for exploring flight logs but comprehensive analyses currently require several disparate and custom tools. This design study helps address the pain points faced by autonomous unmanned aerial vehicle developers and operators. We contribute: a spiral development process model for grounded evaluation visualization development focused on progressively broadening target user involvement and refining user goals; a demonstration of the model as part of developing a deployed and adopted visualization system; a data and task abstraction for developers and operators performing post-flight analysis of autonomous unmanned aerial vehicle logs; the design and implementation of Data Comets, an open-source and web-based interactive visualization tool for post-flight log analysis incorporating temporal, geospatial, and multivariate data; and the results of a summative evaluation of the visualization system and our abstractions based on in-the-wild usage. A free copy of this paper and source code are available at osf.io/h4p7g.

17.
The fast development of novel approaches derived from the Transformer architecture has led to outstanding performance in different scenarios, from Natural Language Processing to Computer Vision. Recently, they have achieved impressive results even in the challenging task of non-rigid shape matching. However, little is known about the capability of the Transformer-encoder architecture for the shape matching task, and its performance has remained largely unexplored. In this paper, we step back and investigate the contribution made by the Transformer-encoder architecture compared to its more recent alternatives, focusing on why and how it works on this specific task. Thanks to the versatility of our implementation, we can harness the bi-directional structure of the correspondence problem, making it more interpretable. Furthermore, we prove that positional encodings are essential for processing unordered point clouds. Through a comprehensive set of experiments, we find that attention and positional encoding are (almost) all you need for shape matching. The simple Transformer-encoder architecture, coupled with relative position encoding in the attention mechanism, is able to obtain strong improvements, reaching the current state of the art.
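As a hedged illustration of the general setup, the PyTorch sketch below encodes two point clouds with a shared Transformer encoder and reads soft correspondences off feature similarity. It uses a simple learned positional encoding of the coordinates rather than the relative encoding inside attention discussed in the paper; dimensions and the matching rule are assumptions.

```python
# Hedged sketch: Transformer-encoder features + similarity-based correspondences.
import torch
import torch.nn as nn

class ShapeEncoder(nn.Module):
    def __init__(self, d_model: int = 128, nhead: int = 4, layers: int = 4):
        super().__init__()
        self.pos_enc = nn.Linear(3, d_model)        # learned encoding of xyz coordinates
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=layers)

    def forward(self, points):                      # points: (B, N, 3)
        return self.encoder(self.pos_enc(points))   # per-point features: (B, N, d_model)

enc = ShapeEncoder()
shape_a = torch.randn(1, 1000, 3)                   # unordered point clouds
shape_b = torch.randn(1, 1000, 3)
feat_a, feat_b = enc(shape_a), enc(shape_b)

# Soft correspondence matrix from feature similarity (rows: points of shape A)
corr = torch.softmax(feat_a @ feat_b.transpose(1, 2), dim=-1)  # (1, N_a, N_b)
match_of_a = corr.argmax(dim=-1)                    # hard assignment per point of A
```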

18.
Interactive visualization tools are being used by an increasing number of members of the general public; however, little is known about how, and how well, people use visualizations to infer causality. Adapted from the mediation causal model, we designed an analytic framework to systematically evaluate human performance, strategies, and pitfalls in a visual causal reasoning task. We recruited 24 participants and asked them to identify the mediators in a fictitious dataset using bar charts and scatter plots within our visualization interface. The results showed that the accuracy of their responses as to whether a variable is a mediator significantly decreased when a confounding variable directly influenced the variable being analyzed. Further analysis demonstrated how individual visualization exploration strategies and interfaces might influence reasoning performance. We also identified common strategies and pitfalls in their causal reasoning processes. Design implications for how future visual analytics tools can be designed to better support causal inference are discussed.
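As background on the mediation causal model referenced above, the sketch below runs the classic regression-based mediation check, comparing the effect of X on Y with and without controlling for the candidate mediator M, on synthetic data. It is illustrative background only, not the paper's analytic framework or study interface.

```python
# Hedged sketch of a regression-based mediation check on synthetic data.
import numpy as np

rng = np.random.default_rng(0)
n = 1000
x = rng.normal(size=n)                               # independent variable
m = 0.8 * x + rng.normal(scale=0.5, size=n)          # candidate mediator
y = 0.6 * m + 0.1 * x + rng.normal(scale=0.5, size=n)

def ols(y, *cols):
    """Ordinary least squares; returns coefficients without the intercept."""
    X = np.column_stack([np.ones(len(y)), *cols])
    return np.linalg.lstsq(X, y, rcond=None)[0][1:]

total_effect = ols(y, x)[0]          # y ~ x
direct_effect = ols(y, x, m)[0]      # y ~ x + m  (effect of x controlling for m)
print(f"total = {total_effect:.2f}, direct = {direct_effect:.2f}")
# A substantial drop from total to direct effect suggests m mediates x -> y.
```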

19.
A model of legal reasoning with cases incorporating theories and values
Reasoning with cases has been a primary focus of those working in AI and law who have attempted to model legal reasoning. In this paper we put forward a formal model of reasoning with cases which captures many of the insights from that previous work. We begin by stating our view of reasoning with cases as a process of constructing, evaluating and applying a theory. Central to our model is a view of the relationship between cases, rules based on cases, and the social values which justify those rules. Having given our view of these relationships, we present our formal model of them, and explain how theories can be constructed, compared and evaluated. We then show how previous work can be described in terms of our model, and discuss extensions to the basic model to accommodate particular features of previous work. We conclude by identifying some directions for future work.

20.
Data visualizations have been widely used on mobile devices like smartphones for various tasks (e.g., visualizing personal health and financial data), making it convenient for people to view such data anytime and anywhere. However, others nearby can also easily peek at the visualizations, resulting in personal data disclosure. In this paper, we propose a perception-driven approach to transform mobile data visualizations into privacy-preserving ones. Specifically, based on human visual perception, we develop a masking scheme to adjust the spatial frequency and luminance contrast of colored visualizations. The resulting visualization retains its original information in close proximity but reduces visibility when viewed from a certain distance or farther away. We conducted two user studies to inform the design of our approach (N=16) and systematically evaluate its performance (N=18), respectively. The results demonstrate the effectiveness of our approach in terms of privacy preservation for mobile data visualizations.
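A crude, hedged illustration of the two ingredients named in the abstract, shifting chart content to high spatial frequencies and lowering its luminance contrast so that it averages out toward grey when viewed from afar, is sketched below with NumPy, Pillow, and SciPy. The filter size, contrast scale, and input file are arbitrary assumptions, not the calibrated, perception-driven parameters of the paper.

```python
# Crude sketch of a frequency/contrast mask; parameters are arbitrary assumptions.
import numpy as np
from PIL import Image
from scipy.ndimage import gaussian_filter

img = np.asarray(Image.open("chart.png").convert("L"), dtype=np.float32) / 255.0

# Keep only high-spatial-frequency detail (fine edges) at reduced contrast:
# up close the detail is resolvable, but from a distance the eye low-pass
# filters the image and the detail blends toward uniform mid-grey.
low_pass = gaussian_filter(img, sigma=3.0)
high_pass = img - low_pass
contrast_scale = 0.4                      # lower value -> less visible from afar
masked = np.clip(0.5 + contrast_scale * high_pass, 0.0, 1.0)

Image.fromarray((masked * 255).astype(np.uint8)).save("masked_chart.png")
```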

