Similar Literature
20 similar documents retrieved (search time: 593 ms).
1.
Market participants and businesses have made tremendous efforts to make the best decisions in a timely manner under varying economic and business circumstances. As such, decision‐making processes based on financial data have been a popular topic in industry. However, analyzing financial data is a non‐trivial task due to its large volume, diversity, and complexity, and this has led to rapid research and development of visualizations and visual analytics systems for financial data exploration. Often, the development of such systems requires researchers to collaborate with financial domain experts to better extract requirements and challenges in their tasks. No prior work has systematically studied and gathered these task requirements or provided an overview of the existing visualizations and visual analytics systems applied in financial domains with respect to real‐world data sets. To this end, we perform a comprehensive survey of visualizations and visual analytics. In this work, we categorize financial systems in terms of data sources, applied automated techniques, visualization techniques, interaction, and evaluation methods. For the categorization and characterization, we utilize existing taxonomies of visualization and interaction. In addition, we present task requirements extracted from interviews with domain experts in order to help researchers design better systems with detailed goals.

2.
Pre‐processing is a prerequisite for conducting effective and efficient downstream data analysis. Pre‐processing pipelines often require multiple routines to address data quality challenges and to bring the data into a usable form. For both the construction and the refinement of pre‐processing pipelines, human‐in‐the‐loop approaches are highly beneficial. This particularly applies to multivariate time series, a complex data type with multiple values developing over time. Due to the high specificity of this domain, it has not been subject to in‐depth research in visual analytics. We present a visual‐interactive approach for pre‐processing multivariate time series data with the following aspects. Our approach supports analysts in carrying out six core analysis tasks related to the pre‐processing of multivariate time series. To support these tasks, we identify requirements for baseline toolkits that may help practitioners in their choice. We characterize the space of visualization designs for uncertainty‐aware pre‐processing and justify our decisions. Two usage scenarios demonstrate the applicability of our approach, design choices, and uncertainty visualizations for the six analysis tasks. This work is one step towards strengthening the visual analytics support for data pre‐processing in general and for uncertainty‐aware pre‐processing of multivariate time series in particular.
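The paper's pipeline is not reproduced in this abstract; as a rough illustration of uncertainty-aware pre-processing of multivariate time series, the following sketch (the variable names and values are invented, assuming a pandas DataFrame with a datetime index) imputes missing values by interpolation and keeps an uncertainty mask for every imputed cell so a downstream visualization can distinguish measured from estimated values.

```python
import numpy as np
import pandas as pd

# Hypothetical multivariate time series with gaps (sensor names are invented).
idx = pd.date_range("2023-01-01", periods=8, freq="h")
raw = pd.DataFrame(
    {"temperature": [21.0, np.nan, 22.5, 23.0, np.nan, np.nan, 24.0, 24.5],
     "pressure":    [1.01, 1.02, np.nan, 1.03, 1.03, 1.04, np.nan, 1.05]},
    index=idx,
)

# Uncertainty mask: True wherever a value had to be imputed.
imputed_mask = raw.isna()

# Simple pre-processing routine: time-based linear interpolation,
# then forward/backward fill for leading or trailing gaps.
clean = raw.interpolate(method="time").ffill().bfill()

# Keep data and uncertainty side by side so an uncertainty-aware
# visualization can encode which values are measured and which are estimated.
print(clean)
print(imputed_mask.mean())  # fraction of imputed values per variable
```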

3.
Real-world event sequences are often complex and heterogeneous, making it difficult to create meaningful visualizations using simple data aggregation and visual encoding techniques. Consequently, visualization researchers have developed numerous visual summarization techniques to generate concise overviews of sequential data. These techniques vary widely in terms of summary structures and contents, and currently there is a knowledge gap in understanding the effectiveness of these techniques. In this work, we present the design and results of an insight-based crowdsourcing experiment evaluating three existing visual summarization techniques: CoreFlow, SentenTree, and Sequence Synopsis. We compare the visual summaries generated by these techniques across three tasks, on six datasets, at six levels of granularity. We analyze the effects of these variables on summary quality as rated by participants and completion time of the experiment tasks. Our analysis shows that Sequence Synopsis produces the highest-quality visual summaries for all three tasks, but understanding Sequence Synopsis results also takes the longest time. We also find that the participants evaluate visual summary quality based on two aspects: content and interpretability. We discuss the implications of our findings on developing and evaluating new visual summarization techniques.
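None of the three surveyed techniques is reproduced here; as a much cruder baseline, the sketch below (pure Python, with invented event sequences) counts the most frequent length-k sub-sequences to give a flavor of how a visual summary condenses many sequences into a few representative patterns.

```python
from collections import Counter
from itertools import islice

# Invented event sequences (e.g., simplified clickstreams).
sequences = [
    ["login", "search", "view", "cart", "checkout"],
    ["login", "search", "view", "view", "logout"],
    ["login", "view", "cart", "checkout"],
    ["search", "view", "cart", "checkout"],
]

def ngrams(seq, k):
    """Yield consecutive sub-sequences of length k."""
    return zip(*(islice(seq, i, None) for i in range(k)))

def summarize(sequences, k=3, top=3):
    """Return the `top` most frequent length-k event patterns."""
    counts = Counter(gram for seq in sequences for gram in ngrams(seq, k))
    return counts.most_common(top)

for pattern, freq in summarize(sequences):
    print(" -> ".join(pattern), f"(x{freq})")
```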

4.
Interactive visualization tools are being used by an increasing number of members of the general public; however, little is known about how, and how well, people use visualizations to infer causality. We designed an analytic framework, adapted from the mediation causal model, to systematically evaluate human performance, strategies, and pitfalls in a visual causal reasoning task. We recruited 24 participants and asked them to identify the mediators in a fictitious dataset using bar charts and scatter plots within our visualization interface. The results showed that the accuracy of their responses as to whether a variable is a mediator significantly decreased when a confounding variable directly influenced the variable being analyzed. Further analysis demonstrated how individual visualization exploration strategies and interfaces might influence reasoning performance. We also identified common strategies and pitfalls in their causal reasoning processes. We discuss design implications for how future visual analytics tools can better support causal inference.
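The study materials are not reproduced here; to make the mediation model concrete, the sketch below (synthetic data, ordinary least squares via numpy) estimates how much of the effect of a treatment X on an outcome Y is transmitted through a candidate mediator M, which is essentially the relationship participants were asked to judge from the charts.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000

# Synthetic mediation structure: X -> M -> Y, plus a direct X -> Y path.
x = rng.normal(size=n)
m = 0.8 * x + rng.normal(scale=0.5, size=n)            # mediator driven by X
y = 0.3 * x + 0.6 * m + rng.normal(scale=0.5, size=n)

def ols_slopes(design, target):
    """Least-squares coefficients (no intercept needed for centered data)."""
    coef, *_ = np.linalg.lstsq(design, target, rcond=None)
    return coef

total_effect = ols_slopes(x[:, None], y)[0]                 # Y ~ X
direct_effect = ols_slopes(np.column_stack([x, m]), y)[0]   # coefficient of X in Y ~ X + M
indirect_effect = total_effect - direct_effect              # effect transmitted through M

print(f"total={total_effect:.2f} direct={direct_effect:.2f} indirect={indirect_effect:.2f}")
```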

5.
Designing introductory materials is extremely important when developing new information visualization techniques. All users, regardless of their domain knowledge, must first learn how to interpret the visually encoded information in order to infer knowledge from visualizations. Yet, despite its significance, there has been little research on how to design effective introductory materials for information visualization. This paper presents a study on the design of online guides that educate new users on how to utilize information visualizations, particularly focusing on the use of exercise questions in the guides. We use two concepts from educational psychology, learning type (or learning style) and teaching method, to design four unique types of online guides. The effects of the guides are measured by comprehension tests with a large group of crowdsourced participants. The tests covered four visualization types (graph, scatter plot, storyline, and tree map) and a complete range of visual analytics tasks. Our statistical analyses indicate that online guides which employ active learning and the top‐down teaching method are the most effective. Our study provides quantitative insight into the use of exercise questions in online guides for information visualizations and will inspire further research on design considerations for other elements in introductory materials.

6.
The analysis of ocean and atmospheric datasets offers a unique set of challenges to scientists working in different application areas. These challenges include dealing with extremely large volumes of multidimensional data, supporting interactive visual analysis, ensemble exploration and visualization, exploring model sensitivities to inputs, mesoscale ocean feature analysis, predictive analytics, heterogeneity and complexity of observational data, representing uncertainty, and many more. Researchers across disciplines collaborate to address such challenges, which has led to significant research and development advances in ocean and atmospheric sciences, as well as in several related areas such as visualization and visual analytics, big data analytics, machine learning, and statistics. In this report, we perform an extensive survey of research advances in the visual analysis of ocean and atmospheric datasets. First, we survey the task requirements by conducting interviews with researchers, domain experts, and end users working with these datasets on a spectrum of analytics problems in the domain of ocean and atmospheric sciences. We then discuss existing models and frameworks related to data analysis, sense‐making, and knowledge discovery for visual analytics applications. We categorize the techniques, systems, and tools presented in the literature based on taxonomies of task requirements, interaction methods, visualization techniques, machine learning and statistical methods, evaluation methods, data types, data dimensions and size, spatial scale, and application areas. We then evaluate the identified task requirements against the categorized research and the existing models and frameworks of visual analytics to determine the extent to which they fulfill these task requirements, and we identify the gaps in current research. In the last part of this report, we summarize the trends, challenges, and opportunities for future research in this area.

7.
Traditional multivariate clustering approaches are common in many geovisualization applications. These algorithms are used to define geodemographic profiles, ecosystems, and various other land use patterns that are based on multivariate measures. Cluster labels are then projected onto a choropleth map to enable analysts to explore spatial dependencies and heterogeneity within the multivariate attributes. However, local variations in the data and choices of clustering parameters can greatly impact the resultant visualization. In this work, we develop a visual analytics framework for exploring and comparing the impact of geographical variations on multivariate clustering. Our framework employs a variety of graphical configurations and summary statistics to explore the spatial extents of clustering. It also allows users to discover patterns that can be concealed by traditional global clustering via several interactive visualization techniques, including a novel drag & drop clustering difference view. We demonstrate the applicability of our framework on a demographic dataset containing quick facts about counties in the continental United States and demonstrate the need for analytical tools that enable users to explore and compare clustering results over varying geographical features and scales.
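The framework itself is interactive and not shown here; the scikit-learn sketch below (invented county attributes and an invented region label) only illustrates the underlying comparison: cluster all counties globally, re-cluster within a single region, and cross-tabulate to see where the two partitions disagree.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)

# Invented multivariate county attributes (e.g., income, median age, density).
features = rng.normal(size=(300, 3))
region = rng.integers(0, 4, size=300)   # invented geographic region assignment

X = StandardScaler().fit_transform(features)

# Global clustering over all counties.
global_labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)

# Local clustering restricted to one geographic region.
mask = region == 0
local_labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X[mask])

# Cross-tabulate global vs. local labels; a diagonal-heavy table means the
# region simply inherits the global clusters, off-diagonal mass signals
# local structure that a global clustering conceals.
crosstab = np.zeros((4, 4), dtype=int)
for g, l in zip(global_labels[mask], local_labels):
    crosstab[g, l] += 1
print(crosstab)
```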

8.
9.
In this paper, we present a novel illustrative multivariate visualization for geological modelling to assist geologists and reservoir engineers in visualizing multivariate datasets in superimposed representations, in contrast to the single‐attribute visualizations supported by commercial software. Our approach extends the use of decals from a single surface to 3D irregular grids, using the layering concept to represent multiple attributes. We also build upon prior work to augment the design and implementation of different geological attributes (namely, rock type, porosity, and permeability). More specifically, we propose a new sampling strategy to generate decals for porosity on the geological grid, a hybrid visualization for permeability which combines 2D decals and 3D ellipsoid glyphs, and a perceptually‐based design that allows us to visualize additional attributes (e.g., oil saturation) while avoiding visual interference between layers. Furthermore, our visual design draws from traditional geological illustrations, facilitating the understanding and communication between interdisciplinary teams. An evaluation by domain experts highlights the potential of our approach for geological modelling and interpretation in this complex domain.
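The rendering pipeline is not described at code level in this abstract; as a small numeric illustration of the ellipsoid-glyph idea for permeability, the numpy sketch below (with an invented symmetric permeability tensor) derives a glyph's axis directions and radii from the tensor's eigendecomposition.

```python
import numpy as np

# Invented symmetric permeability tensor for one grid cell (units arbitrary).
K = np.array([
    [120.0,  15.0,   5.0],
    [ 15.0,  80.0,  10.0],
    [  5.0,  10.0,  40.0],
])

# Eigendecomposition of a symmetric tensor: eigenvalues give the principal
# permeabilities, eigenvectors give the ellipsoid's axis orientations.
radii, axes = np.linalg.eigh(K)

# Scale radii so the largest glyph axis has unit length (purely for display).
radii_scaled = radii / radii.max()

for r, axis in zip(radii_scaled, axes.T):
    print(f"radius {r:.2f} along direction {np.round(axis, 2)}")
```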

10.
The analysis of behavioral city dynamics, such as temporal patterns of visited places and citizens' mobility routines, is an essential task for urban and transportation planning. Social media applications such as Foursquare and Twitter provide access to large‐scale and up‐to‐date dynamic movement data that not only help to understand the social life and pulse of a city but also to maintain and improve urban infrastructure. However, the fast growth rate of this data poses challenges for conventional methods to provide up‐to‐date, flexible analysis; therefore, planning authorities barely consider it. We present a system and design study that leverages social media data to assist urban and transportation planners in better monitoring and analyzing city dynamics such as visited places and mobility patterns in large metropolitan areas. We conducted a goal‐and‐task analysis with urban planning experts. To address these goals, we designed a system with a scalable data monitoring back‐end and an interactive visual analytics interface. The monitoring component uses intelligent pre‐aggregation to allow dynamic queries in near real‐time. The visual analytics interface leverages unsupervised learning to reveal clusters, routines, and unusual behavior in massive data, allowing analysts to understand patterns in time and space. We evaluated our approach in a qualitative user study with urban planning experts, which demonstrates that intuitive integration of advanced analytical tools with visual interfaces is pivotal in making behavioral city dynamics accessible to practitioners. Our interviews also revealed areas for future research.
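The system's back-end is not published in this abstract; the pandas sketch below (invented check-in records and an invented grid resolution) shows the kind of pre-aggregation it describes: bucketing geolocated check-ins into hour-of-day × spatial-cell counts so that dynamic queries can be answered from a small cube instead of the raw event stream.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
n = 10_000

# Invented check-in stream: timestamp, latitude, longitude.
checkins = pd.DataFrame({
    "time": pd.to_datetime("2023-06-01")
            + pd.to_timedelta(rng.integers(0, 7 * 24 * 3600, n), unit="s"),
    "lat": 40.70 + 0.10 * rng.random(n),
    "lon": -74.05 + 0.10 * rng.random(n),
})

# Pre-aggregate into a coarse cube: hour of day x spatial grid cell.
cell = 0.01  # invented grid resolution in degrees
checkins["hour"] = checkins["time"].dt.hour
checkins["cell_lat"] = (checkins["lat"] // cell).astype(int)
checkins["cell_lon"] = (checkins["lon"] // cell).astype(int)

cube = (checkins
        .groupby(["hour", "cell_lat", "cell_lon"])
        .size()
        .rename("count")
        .reset_index())

# A dynamic query ("evening activity per cell") now touches the cube, not raw events.
evening = cube[cube["hour"].between(18, 22)].groupby(["cell_lat", "cell_lon"])["count"].sum()
print(evening.nlargest(5))
```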

11.
There is an ongoing discussion in the visualization community about the relevant factors that render a visualization effective, expressive, memorable, aesthetically pleasing, etc. These factors lead to a large design space for visualizations. To explore this design space, qualitative research methods based on observations and interviews are often necessary. We describe an interview method that allows us to systematically acquire and assess important factors from the subjective answers of interviewees. To this end, we adopt the repertory grid methodology in the context of visualization. It is based on personal construct theory: each person interprets a topic through a set of personal, basic constructs expressed as contrasts. For the individual interpretation of visualizations, this means that these personal terms can be very different, depending on numerous influences such as the prior experiences of the interviewed person. We present an interviewing process, a visual interface, and qualitative and quantitative analysis procedures that are specifically devised to fit the needs of visualization applications. A showcase interview with 15 typical static information visualizations and 10 participants demonstrates that our approach is effective in identifying common constructs as well as individual differences. In particular, we investigate differences between expert and nonexpert interviewees. Finally, we discuss the differences from other qualitative methods and how the repertory grid can be embedded in existing theoretical frameworks of visualization research for the design process.
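The interview interface cannot be shown here; for the quantitative side of repertory grid analysis in general (not this paper's specific procedure), the numpy sketch below uses invented ratings of a few visualizations against a few personal constructs and computes pairwise construct correlations, one common way to surface which constructs an interviewee treats interchangeably.

```python
import numpy as np

# Invented repertory grid: rows = personal constructs (bipolar contrasts),
# columns = rated visualizations, cells = ratings on a 1-5 scale.
constructs = ["cluttered - clean", "static - interactive", "abstract - concrete"]
grid = np.array([
    [2, 4, 5, 1, 3],
    [1, 5, 4, 2, 3],
    [4, 2, 1, 5, 3],
])

# Correlation between constructs across the rated elements: strongly correlated
# (or anti-correlated) constructs suggest the interviewee uses them similarly.
corr = np.corrcoef(grid)
for i in range(len(constructs)):
    for j in range(i + 1, len(constructs)):
        print(f"{constructs[i]!r} vs {constructs[j]!r}: r = {corr[i, j]:+.2f}")
```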

12.
Computer-based technology has played a significant role in crime prevention over the past 30 years, especially with the popularization of spatial databases and crime mapping systems. Police departments frequently use hotspot analysis to identify regions that should be a priority in receiving preventive resources. Practitioners and researchers agree that tracking crime over time and identifying its geographic patterns are vital information for planning efficiently. Frequently, police departments have access to systems that are too complicated and excessively technical, leading to modest usage. By working closely together with domain experts from police agencies of two different countries, we identified and characterized five domain tasks inherent to the hotspot analysis problem and developed SHOC, a visualization tool that strives for simplicity and ease of use in helping users to perform all the domain tasks. SHOC is included in a visual analytics system that allows users without technical expertise to annotate, save, and share analyses. We also demonstrate that our system effectively supports the completion of the domain tasks in two different real-world case studies.
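SHOC itself is not described at code level in this abstract; a common building block for hotspot analysis is kernel density estimation over incident coordinates, which the scipy sketch below (invented crime locations and an invented evaluation grid) evaluates on a regular grid to find the densest cells.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(7)

# Invented incident coordinates: two dense areas plus background noise.
hot1 = rng.normal([0.3, 0.7], 0.03, size=(200, 2))
hot2 = rng.normal([0.6, 0.2], 0.05, size=(150, 2))
noise = rng.random((100, 2))
incidents = np.vstack([hot1, hot2, noise])

# Kernel density estimate over the incident locations
# (gaussian_kde expects an array of shape (n_dims, n_samples)).
kde = gaussian_kde(incidents.T)

# Evaluate the density on a coarse grid and report the strongest hotspot cells.
xs, ys = np.meshgrid(np.linspace(0, 1, 25), np.linspace(0, 1, 25))
density = kde(np.vstack([xs.ravel(), ys.ravel()])).reshape(xs.shape)

rows, cols = np.unravel_index(np.argsort(density, axis=None)[-3:], density.shape)
for row, col in zip(rows, cols):
    print(f"hotspot near x={xs[row, col]:.2f}, y={ys[row, col]:.2f}, "
          f"density={density[row, col]:.2f}")
```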

13.
We present an argument for using visual analytics to aid grounded theory methodologies in qualitative data analysis. Grounded theory methods involve the inductive analysis of data to generate novel insights and theoretical constructs. Making sense of unstructured text data is a task uniquely suited to visual analytics. Using natural language processing techniques such as part‐of‐speech tagging, information content retrieval, and topic modeling, different parts of the data can be structured, semantically associated, and interactively explored, thereby providing conceptual depth to the guided discovery process. We review grounded theory methods and identify processes that can be enhanced through visual analytic techniques. Next, we develop an interface for qualitative text analysis and evaluate our design with qualitative research practitioners who analyze texts with and without visual analytics support. The results of our study suggest how visual analytics can be incorporated into qualitative data analysis tools, and the analytic and interpretive benefits that can result.
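The paper's interface is not reproduced here; the scikit-learn sketch below (a tiny invented corpus) shows the topic-modeling step the abstract mentions, extracting a small number of topics whose top words could then be surfaced to a grounded theory analyst as candidate themes.

```python
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

# Tiny invented corpus of interview snippets.
documents = [
    "the nurse described long shifts and staffing shortages",
    "patients mentioned waiting times and appointment scheduling",
    "staff reported fatigue from long shifts and overtime",
    "scheduling software made appointment booking easier for patients",
]

# Bag-of-words representation followed by latent Dirichlet allocation.
vectorizer = CountVectorizer(stop_words="english")
dtm = vectorizer.fit_transform(documents)

lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(dtm)

# Show the most probable words per topic as candidate codes/themes.
vocab = vectorizer.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top_words = [vocab[i] for i in weights.argsort()[-4:][::-1]]
    print(f"topic {k}: {', '.join(top_words)}")
```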

14.
We present a methodology for making sense of the communicative role of data visualizations in journalistic storytelling and share findings from a survey of water-related data stories. Data stories are a genre of long-form journalism that integrates text, data visualization, and other visual expressions (e.g., photographs, illustrations, videos) for the purpose of data-driven storytelling. In the last decade, a considerable number of data stories about a wide range of topics have been published worldwide. Authors use a variety of techniques to make complex phenomena comprehensible and use visualizations as communicative devices that shape the understanding of a given topic. Despite the popularity of data stories, we, as scholars, still lack a methodological framework for assessing the communicative role of visualizations in data stories. To this end, we draw from data journalism, visual culture, and multimodality studies to propose an interpretative framework in six stages. The process begins with the analysis of content blocks and framing elements and ends with the identification of dimensions, patterns, and relationships between textual and visual elements. The framework is put to the test by analyzing 17 data stories about water-related issues. Our observations from the survey illustrate how data visualizations can shape the framing of complex topics.

15.
The development of custom interactive visualization tools for specific domains and applications has recently been made much simpler by a surge of visualization tools, libraries, and frameworks. Most of these tools are developed for classical data science applications, where a user is supported in analyzing measured or simulated data. Recently, however, there has also been increasing interest in visual support for understanding machine learning algorithms and frameworks, especially for deep learning. Much, if not most, of the visualization support for (deep) learning addresses the developer of the learning system and not the end user (the data scientist). Here we show, using a specific example, namely the development of a matrix calculus algorithm, that supporting visualizations can also greatly benefit the development of algorithms in classical domains, in our case computer algebra. The idea is similar to visually supporting the understanding of learning algorithms: provide the developer with an interactive, visual tool that offers insights into the workings and, importantly, also into the failures of the algorithm under development. Developing visualization support for matrix calculus proceeded similarly to the development of more traditional visual support systems for data analysts. First, we had to acquaint ourselves with the problem, its language, and its challenges by talking to the core developer of the matrix calculus algorithm. Once we understood the challenge, it was fairly easy to develop visual support that significantly streamlined the development of the matrix calculus algorithm.
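The matrix calculus algorithm itself is not given in the abstract; the numpy sketch below illustrates the kind of check such development support enables, comparing an analytic matrix derivative (here the textbook identity d tr(AX)/dX = Aᵀ) against a finite-difference approximation to spot where a derived expression fails.

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.normal(size=(4, 4))
X = rng.normal(size=(4, 4))

def f(X):
    """Scalar objective: trace(A @ X)."""
    return np.trace(A @ X)

# Analytic derivative from matrix calculus: d tr(AX) / dX = A.T
analytic = A.T

# Central finite-difference approximation, entry by entry.
eps = 1e-6
numeric = np.zeros_like(X)
for i in range(X.shape[0]):
    for j in range(X.shape[1]):
        E = np.zeros_like(X)
        E[i, j] = eps
        numeric[i, j] = (f(X + E) - f(X - E)) / (2 * eps)

# A large discrepancy would point at a bug in the derived expression.
print("max abs error:", np.abs(analytic - numeric).max())
```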

16.
Color, as one of the most effective visual variables, is used in many techniques to encode and group data points according to different features. Relations between features and groups appear as visual patterns in the visualization. However, optical illusions may bias perception at the first level of the analysis process. For instance, in pixel‐based visualizations, contrast effects make pixels appear brighter if surrounded by a darker area, which distorts the encoded metric quantity of the data points. Even if we are aware of these perceptual issues, our visual cognition system is not able to compensate for these effects accurately. To overcome this limitation, we present a color optimization algorithm based on perceptual metrics and color perception models to reduce physiological contrast or color effects. We evaluate our technique with a user study and find that it doubles the accuracy of users comparing and estimating color-encoded data values. Since the presented technique can be used in any application without adaptation of the visualization itself, we are able to demonstrate its effectiveness on data visualizations in different domains.
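The optimization algorithm is not reproduced here; as a minimal illustration of the perceptual-metric part, the numpy sketch below converts two sRGB colors to CIELAB and computes the CIE76 color difference (ΔE), the kind of perceptual quantity such an algorithm can try to preserve while counteracting contrast effects.

```python
import numpy as np

def srgb_to_lab(rgb):
    """Convert one sRGB color (components in 0..1) to CIELAB (D65 white point)."""
    rgb = np.asarray(rgb, dtype=float)
    # Undo the sRGB gamma.
    lin = np.where(rgb <= 0.04045, rgb / 12.92, ((rgb + 0.055) / 1.055) ** 2.4)
    # Linear RGB -> XYZ (sRGB primaries, D65).
    M = np.array([[0.4124, 0.3576, 0.1805],
                  [0.2126, 0.7152, 0.0722],
                  [0.0193, 0.1192, 0.9505]])
    xyz = M @ lin
    # Normalize by the D65 reference white and apply the CIELAB nonlinearity.
    xyz /= np.array([0.95047, 1.0, 1.08883])
    delta = 6 / 29
    f = np.where(xyz > delta ** 3, np.cbrt(xyz), xyz / (3 * delta ** 2) + 4 / 29)
    L = 116 * f[1] - 16
    a = 500 * (f[0] - f[1])
    b = 200 * (f[1] - f[2])
    return np.array([L, a, b])

def delta_e_cie76(rgb1, rgb2):
    """Euclidean distance in CIELAB, the simplest perceptual color difference."""
    return float(np.linalg.norm(srgb_to_lab(rgb1) - srgb_to_lab(rgb2)))

# Two similar mid-grays: the RGB difference is small, but Delta E is the
# perceptual difference a perception-aware optimization would reason about.
print(delta_e_cie76([0.50, 0.50, 0.50], [0.55, 0.55, 0.55]))
```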

17.
It remains challenging for information visualization novices to rapidly construct visualizations during exploratory data analysis. We conducted an exploratory laboratory study in which information visualization novices explored fictitious sales data by communicating visualization specifications to a human mediator, who rapidly constructed the visualizations using commercial visualization software. We found that three activities were central to the iterative visualization construction process: data attribute selection, visual template selection, and visual mapping specification. The major barriers faced by the participants were translating questions into data attributes, designing visual mappings, and interpreting the visualizations. Partial specification was common, and the participants used simple heuristics and preferred visualizations they were already familiar with, such as bar, line and pie charts. We derived abstract models from our observations that describe barriers in the data exploration process and uncovered how information visualization novices think about visualization specifications. Our findings support the need for tools that suggest potential visualizations and support iterative refinement, that provide explanations and help with learning, and that are tightly integrated into tool support for the overall visual analytics process.

18.
Graph visualizations encode relationships between objects. Abstracting the objects into group structures provides an overview of the data. Groups can be disjoint or overlapping, and might be organized hierarchically. However, the underlying graph still needs to be represented for analyzing the data in more depth. This work surveys research in visualizing group structures as part of graph diagrams. A particular focus is the explicit visual encoding of groups, rather than only using graph layout to indicate groups implicitly. We introduce a taxonomy of visualization techniques structuring the field into four main categories: visual node attributes vary properties of the node representation to encode the grouping, juxtaposed approaches use two separate visualizations, superimposed techniques work with two aligned visual layers, and embedded visualizations tightly integrate group and graph representation. The derived taxonomies for group structure and visualization types are also applied to group visualizations of edges. We survey group‐only, group–node, group–edge and group–network tasks that are described in the literature as use cases of group visualizations. We discuss results from evaluations of existing visualization techniques as well as main areas of application. Finally, we report future challenges based on interviews we conducted with leading researchers of the field.

19.
We present Lyra, an interactive environment for designing customized visualizations without writing code. Using drag‐and‐drop interactions, designers can bind data to the properties of graphical marks to author expressive visualization designs. Marks can be moved, rotated and resized using handles; relatively positioned using connectors; and parameterized by data fields using property drop zones. Lyra also provides a data pipeline interface for iterative, visual specification of data transformations and layout algorithms. Visualizations created with Lyra are represented as specifications in Vega, a declarative visualization grammar that enables sharing and reuse. We evaluate Lyra's expressivity and accessibility through diverse examples and studies with journalists and visualization designers. We find that Lyra enables users to rapidly develop customized visualizations, covering a design space comparable to existing programming‐based tools.
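Lyra's Vega output is not reproduced here; to give a flavor of the declarative specifications it targets, the sketch below uses Altair, a Python API for Vega-Lite (a higher-level grammar built on Vega, not the exact format Lyra emits), with invented data values, to bind data fields to mark properties without imperative drawing code.

```python
import altair as alt
import pandas as pd

# Invented data: the point is the declarative field-to-property binding below.
data = pd.DataFrame({
    "category": ["A", "B", "C", "D"],
    "value": [28, 55, 43, 91],
})

# Each encoding channel binds a data field to a visual property of the mark,
# much like dropping a field onto a mark's property drop zone in Lyra.
chart = (
    alt.Chart(data)
    .mark_bar()
    .encode(
        x=alt.X("category:N", title="Category"),
        y=alt.Y("value:Q", title="Value"),
        color="category:N",
    )
)

# The chart compiles to a JSON specification that can be shared and reused.
print(chart.to_json()[:200])
```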

20.
Providing tools that make visualization authoring accessible to visualization non‐experts is a major research challenge. Currently the most common approach to generating a visualization is to use software that quickly and automatically produces visualizations based on templates. However, it has recently been suggested that constructing a visualization with tangible tiles may be a more accessible method, especially for people without visualization expertise. There is still much to be learned about the differences between these two visualization authoring practices. To better understand how people author visualizations in these two conditions, we ran a qualitative study comparing the use of software to the use of tangible tiles, for the creation of bar charts. Close observation of authoring activities showed how each of the following varied according to the tool used: 1) sequences of action; 2) distribution of time spent on different aspects of the InfoVis pipeline; 3) pipeline task separation; and 4) freedom to manipulate visual variables. From these observations, we discuss the implications of the variations in activity sequences, noting tool design considerations and pointing to future research questions.
