Similar Documents
20 similar documents retrieved (search time: 31 ms).
1.
Most real-world networks are both dynamic and multivariate in nature, meaning that the network is associated with various attributes and that both the network structure and the attributes evolve over time. Visualizing dynamic multivariate networks is of great significance to the visualization community because of their wide applications across multiple domains. However, it remains challenging because techniques must represent the network structure, the attributes, and their evolution concurrently. Many real-world network analysis tasks require the concurrent use of all three aspects of dynamic multivariate networks. In this paper, we analyze current techniques and present a taxonomy that classifies the existing visualization techniques along three aspects: temporal encoding, topology encoding, and attribute encoding. Finally, we survey application areas and evaluation methods, and discuss challenges for future research.

2.
Machine learning (ML) models are nowadays used in complex applications in various domains, such as medicine, bioinformatics, and other sciences. Due to their black-box nature, however, it may sometimes be hard to understand and trust the results they provide. This has increased the demand for reliable visualization tools that enhance trust in ML models, which has become a prominent topic of research in the visualization community over the past decades. To provide an overview and present the frontiers of current research on the topic, we present a State-of-the-Art Report (STAR) on enhancing trust in ML models with the use of interactive visualization. We define and describe the background of the topic, introduce a categorization for visualization techniques that aim to accomplish this goal, and discuss insights and opportunities for future research directions. Among our contributions is a categorization of trust against different facets of interactive ML, expanded and improved from previous research. Our results are investigated from different analytical perspectives: (a) providing a statistical overview, (b) summarizing key findings, (c) performing topic analyses, and (d) exploring the data sets used in the individual papers, all with the support of an interactive web-based survey browser. We intend this survey to be beneficial for visualization researchers whose interests involve making ML models more trustworthy, as well as for researchers and practitioners from other disciplines searching for effective visualization techniques that let them solve their tasks with confidence and convey the meaning of their data.

3.
Over the past years, an increasing number of publications in information visualization, especially within the field of visual analytics, have mentioned the term "embedding" when describing their computational approach. In this context, embeddings are usually (relatively) low-dimensional, distributed representations of various data types (such as texts or graphs), and because they have proven extremely useful for a variety of data analysis tasks across disciplines and fields, they have become widely used. Existing visualization approaches either aim to support exploration and interpretation of the embedding space through visual representation and interaction, or use embeddings as part of the computational pipeline for addressing downstream analytical tasks. To the best of our knowledge, this is the first survey that takes a detailed look at embedding methods through the lens of visual analytics; its purpose is to provide a systematic overview of the state of the art within the emerging field of embedding visualization. We design a categorization scheme for our approach, analyze the current research frontier based on peer-reviewed publications, and discuss existing trends, challenges, and potential research directions for using embeddings in the context of visual analytics. Furthermore, we provide an interactive survey browser for the collected and categorized survey data, which currently includes 122 entries that appeared between 2007 and 2023.
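As a concrete illustration of a building block shared by many of the surveyed approaches, the sketch below projects a set of high-dimensional embedding vectors to 2-D with t-SNE and plots them. It is a generic, minimal pipeline rather than an excerpt from any particular surveyed system; the random vectors and class labels are placeholders for embeddings produced by a real model.

```python
# Minimal embedding-projection sketch: reduce high-dimensional embeddings to 2-D
# with t-SNE and plot them for visual exploration. Illustrative only; the random
# embeddings stand in for vectors produced by a real embedding model.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
embeddings = rng.normal(size=(300, 128))   # placeholder 128-D embedding vectors
labels = rng.integers(0, 3, size=300)      # placeholder class labels for coloring

xy = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(embeddings)

plt.scatter(xy[:, 0], xy[:, 1], c=labels, s=10, cmap="tab10")
plt.title("2-D t-SNE projection of embedding vectors")
plt.show()
```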

4.
Modern visualization software and programming libraries have made data visualization construction easier for everyone. However, the extent of accessibility design they support for blind and low-vision people is relatively unknown. It is also unclear how they can improve chart content accessibility beyond conventional alternative text and data tables. To address these issues, we examined the current accessibility features in popular visualization tools, revealing limited support for the standard accessibility methods and scarce support for chart content exploration. Next, we investigate two promising accessibility approaches that provide off-the-shelf solutions for chart content accessibility: structured navigation and conversational interaction. We present a comparative evaluation study and discuss what to consider when incorporating them into visualization tools.

5.
Design studies are an integral method of visualization research, with hundreds of instances in the literature. Although design studies are taught in theory, their practical implementation is often excluded from visualization pedagogy due to the lengthy time commitments associated with such studies. Recent research has addressed this challenge and developed an expedited design study framework, the Design Study "Lite" Methodology (DSLM), which can implement design studies with novice students within just 14 weeks. The framework was developed and evaluated based on five semesters of in-person data visualization courses with 30 students or fewer and was implemented in conjunction with Service-Learning (S-L). With the growth and popularity of the data visualization field, and the teaching environment created by the COVID-19 pandemic, more academic institutions are offering visualization courses online. Therefore, in this paper, we strengthen and validate the epistemological foundations of the DSLM framework by testing its (1) adaptability to online learning environments and conditions and (2) scalability to larger classes with up to 57 students. We present two online implementations of the DSLM framework, with and without Service-Learning (S-L), to test the adaptability and scalability of the framework. We further demonstrate that the framework can be applied effectively without the S-L component. We reflect on our experience with the online DSLM implementations and contribute a detailed retrospective analysis using thematic analysis and grounded theory methods to draw valuable recommendations and guidelines for future applications of the framework. This work verifies that DSLM can be used successfully in online classes to teach design study methodology. Finally, we contribute novel additions to the DSLM framework to further enhance it for teaching and learning design studies in the classroom. The preprint and supplementary materials for this paper can be found at https://osf.io/6bjx5/.

6.
Retrieving charts from a large corpus is a fundamental task that can benefit numerous applications such as visualization recommendation. The retrieved results are expected to conform to both explicit visual attributes (e.g., chart type, colormap) and implicit user intents (e.g., design style, context information) that vary across application scenarios. However, existing example-based chart retrieval methods are built upon non-decoupled, low-level visual features that are hard to interpret, while definition-based ones are constrained to pre-defined attributes that are hard to extend. In this work, we propose a new framework, namely WYTIWYR (What-You-Think-Is-What-You-Retrieve), that integrates user intents into the chart retrieval process. The framework consists of two stages: first, the Annotation stage disentangles the visual attributes within the query chart; second, the Retrieval stage embeds the user's intent via a customized text prompt as well as the bitmap query chart to recall the targeted retrieval results. We develop a prototype WYTIWYR system leveraging a contrastive language-image pre-training (CLIP) model to achieve zero-shot classification as well as multi-modal input encoding, and test the prototype on a large corpus of charts crawled from the Internet. Quantitative experiments, case studies, and qualitative interviews are conducted. The results demonstrate the usability and effectiveness of our proposed framework.
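The zero-shot classification step described above can be sketched with an off-the-shelf CLIP model. The snippet below is a minimal illustration using the Hugging Face transformers CLIP checkpoint, not the authors' WYTIWYR implementation; the image file name and the prompt texts are assumptions made for the example.

```python
# Minimal zero-shot chart-type classification with CLIP (Hugging Face transformers).
# Illustrative sketch, not the WYTIWYR system itself; the model checkpoint, prompts,
# and file name are assumptions chosen for the example.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("query_chart.png")  # hypothetical query chart bitmap
prompts = ["a bar chart", "a line chart", "a scatter plot", "a pie chart"]

inputs = processor(text=prompts, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# Higher probability = better match between the chart bitmap and the text prompt.
probs = outputs.logits_per_image.softmax(dim=-1).squeeze()
for prompt, p in zip(prompts, probs.tolist()):
    print(f"{prompt}: {p:.3f}")
```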

7.
Targeted attacks on IT systems are a rising threat to the confidentiality of sensitive data and the availability of critical systems. The emergence of Advanced Persistent Threats (APTs) has made it paramount to fully understand the particulars of such attacks in order to improve or devise effective defense mechanisms. Grammar inference paired with visual analytics (VA) techniques offers a powerful foundation for the automated extraction of behavioral patterns from sequential event traces. To facilitate the interpretation and analysis of APTs, we present SEQUIN, a grammar inference system based on the Sequitur compression algorithm that constructs a context-free grammar (CFG) from string-based input data. In addition to recursive rule extraction, we expanded the procedure with automated assessment routines capable of dealing with multiple input sources and types. This automated assessment enables the accurate identification of interesting frequent or anomalous patterns in sequential corpora of arbitrary quantity and origin. On the formal side, we extended the CFG with attributes that help describe the extracted (malicious) actions. Discovery-focused pattern visualization of the output is provided by our dedicated KAMAS VA prototype.
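To convey the idea of grammar-based pattern extraction from an event trace, the sketch below repeatedly replaces the most frequent adjacent pair of symbols with a new rule. Note that this is a simplified, Re-Pair-style illustration rather than the incremental Sequitur algorithm that SEQUIN builds on, and the toy event names are invented for the example.

```python
# Simplified grammar-based pattern extraction from an event trace.
# NOTE: a Re-Pair-style sketch for illustration, not the incremental
# Sequitur algorithm that SEQUIN is based on.
from collections import Counter

def infer_grammar(seq, min_count=2):
    """Repeatedly replace the most frequent adjacent pair with a new rule."""
    rules, next_id = {}, 0
    seq = list(seq)
    while True:
        pairs = Counter(zip(seq, seq[1:]))
        if not pairs:
            break
        (a, b), count = pairs.most_common(1)[0]
        if count < min_count:
            break
        nonterminal = f"R{next_id}"
        next_id += 1
        rules[nonterminal] = (a, b)
        # Rewrite the sequence, replacing non-overlapping occurrences of (a, b).
        out, i = [], 0
        while i < len(seq):
            if i + 1 < len(seq) and seq[i] == a and seq[i + 1] == b:
                out.append(nonterminal)
                i += 2
            else:
                out.append(seq[i])
                i += 1
        seq = out
    return seq, rules

# Toy event trace with a recurring "login -> escalate -> exfiltrate" pattern.
trace = ["login", "escalate", "exfiltrate", "idle",
         "login", "escalate", "exfiltrate", "idle",
         "login", "escalate", "exfiltrate"]
compressed, rules = infer_grammar(trace)
print(compressed)
print(rules)
```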

8.
FacetAtlas: multifaceted visualization for rich text corpora
Documents in rich text corpora usually contain multiple facets of information. For example, an article about a specific disease often covers facets such as symptoms, treatment, cause, diagnosis, prognosis, and prevention. Thus, documents may have different relations based on different facets. Powerful search tools have been developed to help users locate lists of individual documents that are most related to specific keywords. However, there is a lack of effective analysis tools that reveal the multifaceted relations of documents within or across document clusters. In this paper, we present FacetAtlas, a multifaceted visualization technique for visually analyzing rich text corpora. FacetAtlas combines search technology with advanced visual analytical tools to convey both global and local patterns simultaneously. We describe several unique aspects of FacetAtlas, including (1) node cliques and multifaceted edges, (2) an optimized density map, (3) automated opacity pattern enhancement for highlighting visual patterns, and (4) interactive context switching between facets. In addition, we demonstrate the power of FacetAtlas through a case study that targets patient education in the health care domain. Our evaluation shows the benefits of this work, especially in support of complex multifaceted data analysis.

9.
Visualization of sentiments and opinions extracted from or annotated in texts has become a prominent topic of research over the last decade. From basic pie and bar charts used to illustrate customer reviews to extensive visual analytics systems involving novel representations, sentiment visualization techniques have evolved to deal with complex multidimensional data sets, including temporal, relational and geospatial aspects. This contribution presents a survey of sentiment visualization techniques based on a detailed categorization. We describe the background of sentiment analysis, introduce a categorization for sentiment visualization techniques that includes 7 groups with 35 categories in total, and discuss 132 techniques from peer‐reviewed publications together with an interactive web‐based survey browser. Finally, we discuss insights and opportunities for further research in sentiment visualization. We expect this survey to be useful for visualization researchers whose interests include sentiment or other aspects of text data as well as researchers and practitioners from other disciplines in search of efficient visualization techniques applicable to their tasks and data.

10.
We present a computational technique that aids with the design of structurally sound metal frames, tailored for robotic fabrication using an existing process that integrates automated bar bending, welding, and cutting. By aligning frames with structurally favorable orientations and decomposing models into fabricable units, we make the fabrication process scale-invariant and align frames globally in an aesthetically pleasing and structurally informed manner. Relying on standard analysis of frames, we then co-optimize the shape and topology of bars at the local unit level. At this level, we minimize combinations of functional and aesthetic objectives under strict fabrication constraints that model the assembly of discrete sets of bent bars. We demonstrate the capabilities of our global-to-local approach on four robotically constructed examples.

11.
Handling emergencies requires efficient and effective collaboration among medical professionals. To analyze their performance, we developed VisCoMET in an application study: a visual analytics approach that displays the interactions of healthcare personnel during triage training for a mass casualty incident. The application scenario stems from social interaction research, where the collaboration of teams is studied from different perspectives. We integrate recorded annotations from multiple sources, such as recorded videos of the sessions, transcribed communication, and eye-tracking information. For each session, an information-rich timeline visualizes events across these different channels, specifically highlighting interactions between the team members. We provide algorithmic support to identify frequent event patterns and to search for user-defined event sequences. To compare different teams, an overview visualization aggregates each training session into a visual glyph as a node, connected to similar sessions through edges. An application example shows the usage of the approach in the comparative analysis of triage training sessions, where multiple teams encountered the same scene, and highlights discovered insights. The approach was evaluated through feedback from visualization and social interaction experts. The results show that the approach supports reflecting on teams' performance through exploratory analysis of collaboration behavior, while particularly enabling the comparison of triage training sessions.
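As a minimal stand-in for the frequent-event-pattern support described above (not the VisCoMET implementation), the sketch below counts event n-grams that occur in at least a given number of annotated sessions; the event names are invented for the example.

```python
# Count event n-grams that recur across annotated sessions; a simple stand-in
# for frequent event pattern mining, not the VisCoMET code.
from collections import Counter

def frequent_patterns(sessions, n=3, min_support=2):
    counts = Counter()
    for events in sessions:
        # Count each distinct n-gram at most once per session (session support).
        seen = {tuple(events[i:i + n]) for i in range(len(events) - n + 1)}
        counts.update(seen)
    return [(p, c) for p, c in counts.most_common() if c >= min_support]

# Toy event sequences standing in for annotated triage-training sessions.
sessions = [
    ["look_at_patient", "speak", "triage_tag", "look_at_patient", "speak"],
    ["speak", "look_at_patient", "speak", "triage_tag"],
    ["look_at_patient", "speak", "triage_tag", "speak"],
]
print(frequent_patterns(sessions, n=3))
```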

12.
Many text collections with temporal references, such as news corpora and weblogs, are generated to report and discuss real-life events. Thus, event-related tasks, such as detecting the real-life events that drive the generation of the text documents, tracking event evolution, and investigating reports and commentaries about events of interest, are important when exploring such text collections. To incorporate and leverage human effort in conducting such tasks, we propose a novel visual analytics approach named EventRiver. EventRiver integrates event-based automated text analysis and visualization to reveal the events motivating the text generation and the long-term stories they construct. In the visualization, users can interactively conduct tasks such as event browsing, tracking, association, and investigation. A working prototype of EventRiver has been implemented for exploring news corpora. A set of case studies, experiments, and a preliminary user test have been conducted to evaluate its effectiveness and efficiency.

13.
In recent years, a collection of new techniques that deal with video as input data has emerged in computer graphics and visualization. In this survey, we report the state of the art in video-based graphics and video visualization. We provide a review of techniques for making photo-realistic or artistic computer-generated imagery from videos, as well as methods for creating summary and/or abstract visual representations to reveal important features and events in videos. We provide a new taxonomy to categorize the concepts and techniques in this newly emerged body of knowledge. To support this review, we also give a concise overview of the major advances in automated video analysis, as some techniques in this field (e.g. feature extraction, detection, tracking and so on) have been featured in video-based modelling and rendering pipelines for graphics and visualization.

14.
Research related to online discussions frequently faces the problem of analyzing huge corpora. Natural Language Processing (NLP) technologies may allow automating this analysis. However, state-of-the-art machine learning and text mining approaches yield models that do not transfer well between corpora on different topics. Also, segmentation is a necessary step, but trained models are frequently very sensitive to the particular segmentation that was used when the model was trained. Therefore, in prior published research on text classification in a CSCL context, the data was segmented by hand. We discuss work towards overcoming these challenges. We present a framework for developing coding schemes optimized for automatic segmentation and for context-independent coding that builds on this segmentation. The key idea is to extract the semantic and syntactic features of each single word by using part-of-speech tagging and named-entity recognition before the raw data is segmented and classified. Our results show that coding on the micro-argumentation dimension can be fully automated. Finally, we discuss how fully automated analysis can enable context-sensitive support for collaborative learning.
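The word-level feature extraction described here can be illustrated with spaCy, which provides both part-of-speech tagging and named-entity recognition. The snippet is a minimal sketch with an invented example sentence and the standard small English pipeline; it is not the toolchain used in the original study.

```python
# Word-level feature extraction with part-of-speech tagging and named-entity
# recognition, as a precursor to automatic segmentation and coding.
# Illustrative sketch using spaCy's small English pipeline
# (requires: python -m spacy download en_core_web_sm).
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("Alice argued that the greenhouse data from NASA supports her claim.")

# Each token is annotated with its POS tag and (possibly empty) entity label;
# these per-word features can then feed an automatic segmenter and coder.
features = [(tok.text, tok.pos_, tok.ent_type_ or "-") for tok in doc]
for text, pos, ent in features:
    print(f"{text:12s} {pos:6s} {ent}")
```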

15.
How do we ensure the veracity of science? The act of manipulating or fabricating scientific data has led to many high-profile fraud cases and retractions. Detecting manipulated data, however, is a challenging and time-consuming endeavor. Automated detection methods are limited due to the diversity of data types and manipulation techniques. Furthermore, patterns automatically flagged as suspicious can have reasonable explanations. Instead, we propose a nuanced approach in which experts analyze tabular datasets, e.g., as part of the peer-review process, using guided, interactive visualization. In this paper, we present an analysis of how manipulated datasets are created and the artifacts these techniques generate. Based on these findings, we propose a suite of visualization methods to surface potential irregularities. We have implemented these methods in Ferret, a visualization tool for data forensics work. Ferret makes potential data issues salient and provides guidance on spotting signs of tampering and differentiating them from truthful data.
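As a generic illustration of the kinds of irregularities such visualization methods aim to surface (and not Ferret's actual techniques), the sketch below runs two simple checks on a small, invented table: exact duplicates among supposedly noisy measurements and a skewed distribution of terminal digits.

```python
# Two simple irregularity checks on a tabular dataset: exact duplicate values
# and a skewed distribution of terminal digits. Generic illustrations of the
# kinds of artifacts manipulated data can exhibit, not Ferret's actual methods.
import pandas as pd

df = pd.DataFrame({
    "measurement": [12.31, 12.31, 14.72, 13.55, 12.31, 15.01, 14.72, 13.55],
    "replicate":   [1, 2, 3, 4, 5, 6, 7, 8],
})

# Check 1: exact duplicates in a column that should contain noisy measurements.
dupes = df[df.duplicated(subset="measurement", keep=False)]
print("Repeated measurement values:\n", dupes)

# Check 2: frequency of the last decimal digit; heavy skew can hint at fabrication,
# although legitimate explanations (rounding, instrument precision) exist.
last_digit = (df["measurement"] * 100).round().astype(int) % 10
print("Terminal-digit counts:\n", last_digit.value_counts().sort_index())
```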

16.
In the past few years, advances in graphics hardware have fuelled an explosion of research and development in the field of interactive and real-time rendering in screen space. Following this trend, a rapidly increasing number of applications rely on multifragment rendering solutions to develop visually convincing graphics applications with dynamic content. The main advantage of these approaches is that they encompass additional rasterised geometry by retaining more information from the fragment sampling domain, thus augmenting the visibility determination stage. With this survey, we provide an overview of and insight into the extensive, yet active research and respective literature on multifragment rendering. We formally present the multifragment rendering pipeline, clearly identifying the construction strategies, the core image operation categories and their mapping to the respective applications. We describe features and trade-offs for each class of techniques, point out GPU optimisations and limitations, and provide practical recommendations for choosing an appropriate method for each application. Finally, we offer fruitful context for discussion by outlining some existing problems and challenges as well as by presenting opportunities for impactful future research directions.

17.
In order to discuss the kinds of reasoning a visualization supports and the conclusions that can be drawn within the analysis context, a theoretical framework is needed that enables a formal treatment of the reasoning process. Such a model needs to encompass three stages of the visualization pipeline: encoding, decoding and interpretation. The encoding details how data are transformed into a visualization and what can be seen in the visualization. The decoding explains how humans construct graphical contexts inside the depicted visualization and how they interpret them, assigning meaning to displayed structures according to a formal reasoning strategy. In the presented model, we adapt and combine theories for the different stages into a unified formal framework, such that the analysis process is modelled as an assignment of meaning to displayed structures according to a formal reasoning strategy. Additionally, we propose the ConceptGraph, a combined graph-based representation of the finite-state transducers resulting from the three stages, which can be used to formalize and understand the reasoning process. We apply the new model to several visualization types and investigate reasoning strategies for various tasks.

18.
Training data plays an essential role in modern applications of machine learning. However, gathering labeled training data is time-consuming. Therefore, labeling is often outsourced to less experienced users, or completely automated. This can introduce errors, which compromise valuable training data and lead to suboptimal training results. We thus propose a novel approach that uses the power of pretrained classifiers to visually guide users to noisy labels and lets them interactively check error candidates in order to iteratively improve the training data set. To systematically investigate training data, we propose a categorization of labeling errors into three different types, based on an analysis of potential pitfalls in label acquisition processes. For each of these types, we present approaches to detect, reason about, and resolve error candidates, and we propose measures and visual guidance techniques to support machine learning users. Our approach has been used to spot errors in well-known machine learning benchmark data sets, and we tested its usability in a user evaluation. While initially developed for images, the techniques presented in this paper are independent of the classification algorithm and can also be extended to many other types of training data.
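The core idea of using classifiers to point users at suspicious labels can be reduced to a small sketch: train with cross-validation and flag samples whose assigned label contradicts a confident prediction. This is a generic baseline, not the paper's interactive visual workflow; the injected label flips simulate labeling errors.

```python
# Flag potential label errors: train a classifier with cross-validation and mark
# samples whose given label disagrees with a confident prediction. A generic
# sketch of the idea, not the paper's interactive visual workflow.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_predict

X, y = load_digits(return_X_y=True)
y_noisy = y.copy()
rng = np.random.default_rng(0)
flip = rng.choice(len(y), size=50, replace=False)   # inject 50 simulated label errors
y_noisy[flip] = (y_noisy[flip] + 1) % 10

clf = RandomForestClassifier(n_estimators=200, random_state=0)
proba = cross_val_predict(clf, X, y_noisy, cv=5, method="predict_proba")
pred = proba.argmax(axis=1)
confidence = proba.max(axis=1)

# Error candidates: confident predictions that contradict the assigned label.
candidates = np.where((pred != y_noisy) & (confidence > 0.9))[0]
print(f"{len(candidates)} candidates flagged; "
      f"{np.isin(candidates, flip).sum()} of them are true injected errors")
```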

19.
Multi-Light Image Collections (MLICs), i.e., stacks of photos of a scene acquired with a fixed viewpoint and varying surface illumination, provide large amounts of visual and geometric information. In this survey, we provide an up-to-date integrative view of MLICs as a means to gain insight into objects through the analysis and visualization of the acquired data. After a general overview of MLIC capture and storage, we focus on the main approaches for producing representations usable for visualization and analysis. In this context, we first discuss methods for direct exploration of the raw data. We then summarize approaches that strive to emphasize shape and material details by fusing all acquisitions into a single enhanced image. Subsequently, we focus on approaches that produce relightable images through intermediate representations. This can be done either by fitting various analytic forms of the light transform function, or by locally estimating the parameters of physically plausible models of shape and reflectance and using them for visualization and analysis. We then review techniques that improve object understanding by using illustrative approaches to enhance relightable models, or by extracting features and derived maps. We also review how these methods are applied in several main application domains and which tools are available for MLIC visualization and analysis. Finally, we point out relevant research issues, analyze research trends, and offer guidelines for practical applications.
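One of the analytic light-transform fits mentioned above is the classic biquadratic Polynomial Texture Map (PTM). The sketch below fits PTM coefficients per pixel by least squares and relights a small patch under a new light direction; the random image stack and light directions are placeholders for a captured MLIC.

```python
# Per-pixel fit of a biquadratic Polynomial Texture Map (PTM)-style light
# transform from a Multi-Light Image Collection. Illustrative sketch with random
# data standing in for the captured photo stack and calibrated light directions.
import numpy as np

n_lights, h, w = 30, 8, 8
rng = np.random.default_rng(0)
light_dirs = rng.normal(size=(n_lights, 2))                # (lu, lv) per acquisition
light_dirs /= np.linalg.norm(light_dirs, axis=1, keepdims=True)
images = rng.random((n_lights, h, w))                      # placeholder photo stack

lu, lv = light_dirs[:, 0], light_dirs[:, 1]
basis = np.stack([lu**2, lv**2, lu * lv, lu, lv, np.ones_like(lu)], axis=1)  # (n_lights, 6)

# Solve for 6 PTM coefficients per pixel in a single least-squares call.
pixels = images.reshape(n_lights, -1)                      # (n_lights, h*w)
coeffs, *_ = np.linalg.lstsq(basis, pixels, rcond=None)    # (6, h*w)

# Relight the whole patch under a new light direction.
new_lu, new_lv = 0.3, 0.5
new_basis = np.array([new_lu**2, new_lv**2, new_lu * new_lv, new_lu, new_lv, 1.0])
relit = (new_basis @ coeffs).reshape(h, w)
print(relit.shape, float(relit.min()), float(relit.max()))
```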

20.
Breast perfusion data are dynamic medical image data that depict perfusion characteristics of the investigated tissue. These data consist of a series of static datasets that are acquired at different time points and aggregated into time intensity curves (TICs) for each voxel. The characteristics of these TICs provide important information about a lesion's composition, but their analysis is time-consuming due to their large number. Subsequently, these TICs are used to classify a lesion as benign or malignant. This lesion scoring is commonly done manually by physicians and may therefore be subject to bias. We propose an approach that addresses both of these problems by combining automated lesion classification with a visual confirmatory analysis, especially for uncertain cases. First, we cluster the TICs of a lesion using Ordering Points To Identify the Clustering Structure (OPTICS) and then visualize these clusters. Together with their relative sizes, they are added to a library. We then model fuzzy inference rules by using the lesion's TIC clusters as antecedents and its score as the consequent. Using a fuzzy scoring system, we can suggest a score for a new lesion. Second, to allow physicians to confirm the suggestion in uncertain cases, we display the TIC clusters together with their spatial distribution and allow a side-by-side comparison of two lesions. With our knowledge-assisted comparative visual analysis, physicians can explore and classify breast lesions. The true positive prediction accuracy of our scoring system reached 71.4% in one-fold cross-validation using 14 lesions.
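The first step of the approach, clustering per-voxel TICs with OPTICS, can be sketched with scikit-learn. The two synthetic curve shapes below (fast wash-in/wash-out versus slow persistent enhancement) are invented placeholders for real perfusion measurements.

```python
# Cluster synthetic time intensity curves (TICs) with OPTICS, mirroring the first
# step of the described approach. The curves are synthetic placeholders for
# per-voxel perfusion measurements.
import numpy as np
from sklearn.cluster import OPTICS

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 20)                     # 20 acquisition time points

# Two synthetic TIC shapes: fast wash-in/wash-out vs. slow persistent enhancement.
fast = np.exp(-((t - 0.3) ** 2) / 0.02)
slow = 1 - np.exp(-4 * t)
tics = np.vstack([fast + 0.05 * rng.normal(size=(100, 20)),
                  slow + 0.05 * rng.normal(size=(100, 20))])

labels = OPTICS(min_samples=10).fit_predict(tics)          # -1 marks noise points
clusters, sizes = np.unique(labels[labels >= 0], return_counts=True)
print("Relative cluster sizes:", dict(zip(clusters.tolist(),
                                          (sizes / len(tics)).round(2).tolist())))
```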
