Similar Documents
 20 similar documents found
1.
The majority of display devices used in visualization are 2D displays. Inevitably, it is often necessary to overlay one piece of visual information on top of another, especially in applications such as multi-field visualization and geo-spatial information visualization. In this paper, we present a conceptual framework for studying the mechanisms for overlaying multiple pieces of visual information while allowing users to recover occluded information. We adopt the term 'multiplexing' from tele- and data communication to encompass all such overlapping mechanisms. We establish 10 categories of visual multiplexing mechanisms and draw supporting evidence for this conceptual framework from both the perception literature and existing work in visualization. We examine the relationships between multiplexing and information-theoretic measures. This new conceptual categorization contributes an integral component to the much-needed theory of visualization.

2.
Narrative visualizations combine conventions of communicative and exploratory information visualization to convey an intended story. We demonstrate visualization rhetoric as an analytical framework for understanding how design techniques that prioritize particular interpretations in visualizations that "tell a story" can significantly affect end-user interpretation. We draw a parallel between narrative visualization interpretation and evidence from framing studies in political messaging, decision-making, and literary studies. Devices for understanding the rhetorical nature of narrative information visualizations are presented, informed by the rigorous application of concepts from critical theory, semiotics, journalism, and political theory. We draw attention to how design tactics represent additions or omissions of information at various levels (the data, visual representation, textual annotations, and interactivity) and how visualizations denote and connote phenomena with reference to unstated viewing conventions and codes. Classes of rhetorical techniques identified via a systematic analysis of recent narrative visualizations are presented and characterized according to their rhetorical contribution to the visualization. We describe how designers and researchers can benefit from the potentially positive aspects of visualization rhetoric in designing engaging, layered narrative visualizations, and how our framework can shed light on how a visualization design prioritizes specific interpretations. We identify areas where future inquiry into visualization rhetoric can improve understanding of visualization interpretation.

3.
Cognitive radio technology, a revolutionary communication paradigm that can utilize existing wireless spectrum resources more efficiently, has been receiving growing attention in recent years. As network users, who may pursue different goals, need to adapt their operating parameters to a dynamic environment, traditional spectrum sharing approaches based on a fully cooperative, static, and centralized network environment are no longer applicable. Instead, game theory has been recognized as an important tool for studying, modeling, and analyzing the cognitive interaction process. In this tutorial survey, we introduce the most fundamental concepts of game theory and explain in detail how these concepts can be leveraged in designing spectrum sharing protocols, with an emphasis on state-of-the-art research contributions in cognitive radio networking. Research challenges and future directions in game-theoretic modeling approaches are also outlined. This tutorial survey provides a comprehensive treatment of game theory with important applications in cognitive radio networks, and will aid the design of efficient, self-enforcing, and distributed spectrum sharing schemes in future wireless networks.
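As a toy illustration of the game-theoretic view of spectrum sharing (a minimal sketch, not a scheme from the survey; all parameters are hypothetical), the code below runs best-response dynamics in a channel-selection congestion game: each user greedily moves to the least-loaded channel, and because finite congestion games admit a potential function, the loop reaches a pure Nash equilibrium.

```python
import random

def best_response_channel_selection(n_users=6, n_channels=3, max_rounds=100, seed=0):
    """Toy spectrum-sharing congestion game: each user repeatedly switches to
    the channel with the fewest other users. Best-response dynamics converge
    to a pure Nash equilibrium in finite congestion games."""
    rng = random.Random(seed)
    choice = [rng.randrange(n_channels) for _ in range(n_users)]
    for _ in range(max_rounds):
        changed = False
        for u in range(n_users):
            load = [0] * n_channels
            for v, c in enumerate(choice):
                if v != u:
                    load[c] += 1  # congestion seen by user u on each channel
            best = min(range(n_channels), key=lambda c: load[c])
            if load[best] < load[choice[u]]:
                choice[u] = best
                changed = True
        if not changed:  # no user can improve: Nash equilibrium reached
            break
    return choice

print(best_response_channel_selection())
```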

4.
In this paper, we introduce the concept of isosurface similarity maps for the visualization of volume data. Isosurface similarity maps present structural information about a volume data set by depicting similarities between individual isosurfaces, quantified by a robust information-theoretic measure. Unlike conventional histograms, they are not based on the frequency of isovalues and/or derivatives and therefore provide complementary information. We demonstrate that this new representation can be used to guide transfer function design and visualization parameter specification. Furthermore, we use isosurface similarity to develop an automatic, parameter-free method for identifying representative isovalues. Using real-world data sets, we show that isosurface similarity maps can be a useful addition to conventional classification techniques.
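As a minimal sketch of how such a map might be computed, the code below measures pairwise isosurface similarity as the mutual information between distance fields of the isosurfaces. It assumes a histogram MI estimator and a distance-transform approximation of the distance to each isosurface; this is an illustration, not the paper's exact (normalized) measure.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def isosurface_distance_field(volume, isovalue):
    """Per-voxel distance (in voxels) to the isosurface {x : volume(x) = isovalue},
    approximated via distance transforms of the thresholded mask."""
    inside = volume >= isovalue
    # One term is zero at every voxel, so the sum gives each voxel's
    # distance to the nearest voxel on the other side of the isosurface.
    return distance_transform_edt(inside) + distance_transform_edt(~inside)

def mutual_information(x, y, bins=64):
    """Histogram estimate of I(X;Y) in bits."""
    joint, _, _ = np.histogram2d(x.ravel(), y.ravel(), bins=bins)
    p = joint / joint.sum()
    px, py = p.sum(axis=1, keepdims=True), p.sum(axis=0, keepdims=True)
    nz = p > 0
    return float((p[nz] * np.log2(p[nz] / (px @ py)[nz])).sum())

def similarity_map(volume, n_isovalues=32):
    """Symmetric matrix of pairwise isosurface similarities."""
    isovalues = np.linspace(volume.min(), volume.max(), n_isovalues + 2)[1:-1]
    fields = [isosurface_distance_field(volume, v) for v in isovalues]
    m = np.zeros((n_isovalues, n_isovalues))
    for i in range(n_isovalues):
        for j in range(i, n_isovalues):
            m[i, j] = m[j, i] = mutual_information(fields[i], fields[j])
    return m
```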

5.
Multi-modal data of the complex human anatomy contain a wealth of information. To visualize and explore such data, techniques for emphasizing important structures and controlling visibility are essential. Such fused overview visualizations guide physicians to suspicious regions to be analysed in detail, e.g. with slice-based viewing. We give an overview of the state of the art in multi-modal medical data visualization techniques. Multi-modal medical data consist of multiple scans of the same subject using various acquisition methods, often combining multiple complementary types of information. Three-dimensional visualization techniques for multi-modal medical data can be used in diagnosis, treatment planning, and doctor-patient as well as interdisciplinary communication. Over the years, multiple techniques have been developed to cope with the various associated challenges and to present the relevant information from multiple sources in an insightful way. We present an overview of these techniques, analyse the specific challenges that arise in multi-modal data visualization, and discuss how recent work has aimed to solve them, often using smart visibility techniques. We provide a taxonomy of these multi-modal visualization applications based on the modalities used and the visualization techniques employed. Additionally, we identify unsolved problems as potential future research directions.

6.
Visualization and visual analytics have become common means in many fields of combining human and machine intelligence to understand and analyze data collaboratively. Artificial intelligence can improve data quality by learning from large data sets, capture key information, and select the most effective visual representations, enabling users to understand data from visualizations faster, more accurately, and more comprehensively. Using AI methods, interactive visualization systems can also better learn user habits and user intent, and recommend visualization forms, interactions, and data features that match user needs, thereby reducing the learning and time costs of exploration and improving the efficiency of interactive analysis. The application of AI methods in visualization has attracted great attention and produced a large body of academic work. Starting from the latest work, this paper discusses the role of AI in the key steps of the visualization pipeline, including how to represent and manage data intelligently, how to help users quickly create and customize visualizations, how to extend interaction techniques and improve interaction efficiency with AI, and how to support interactive data analysis with AI. Specifically, the paper details the tasks to be accomplished at each step and approaches to solving them, introduces the corresponding AI methods (such as deep network architectures), and uses chart data as an example to describe applications of intelligent visualization and visual analytics. Finally, it discusses trends in intelligent visualization methods and looks ahead to future research directions and application scenarios.

7.
Much visualization research has focused on improving rendering quality and speed and enhancing the perceptibility of features in the data. Recently, significant emphasis has been placed on focus+context (F+C) techniques (e.g., fisheye views and magnification lenses) for data exploration, in addition to viewing transformation and hierarchical navigation. However, most existing data exploration techniques rely on the manipulation of viewing attributes of the rendering system or optical attributes of the data objects, with users being passive viewers. In this paper, we propose a more active approach to data exploration, which attempts to mimic how we would explore data if we were able to hold it and interact with it in our hands. This involves allowing users to physically or actively manipulate the geometry of a data object. While this approach has traditionally been used in applications such as surgical simulation, where the original geometry of the data objects is well understood by the users, several challenges arise when it is generalized to applications, such as flow and information visualization, where there is no common perception as to the normal or natural geometry of a data object. We introduce a taxonomy and a set of transformations specifically for illustrative deformation in general data exploration. We present combined geometric and optical illustration operators for focus+context visualization and examine the best means of preventing the deformed context from being misperceived. We demonstrate the feasibility of this generalization with examples of flow, information, and video visualization.

8.
Even as data- and analytics-driven applications become increasingly popular, retrieving data from shared databases poses a threat to the privacy of their users. For example, investors or patients retrieve records about stocks or diseases they are interested in from a stock or medical database. Knowledge of such interest is sensitive information that the database server would have access to unless mitigating measures are deployed. Private information retrieval (PIR) is a promising security primitive for protecting the privacy of users' interests. PIR allows the retrieval of a data record from a database without letting the database server know which record is being retrieved. The privacy guarantees can be either information-theoretic or computational. Alternatively, anonymizers, which hide the identities of data users, may be used to protect the privacy of users' interests in some situations. In this paper, we study rPIR, a new family of information-theoretic PIR schemes using ramp secret sharing. We have designed four rPIR schemes, using three ramp secret sharing approaches, achieving answer communication costs close to the cost of non-private information retrieval. Evaluation shows that, for many practical settings, rPIR schemes can achieve lower communication costs and the same level of privacy compared with traditional information-theoretic PIR schemes and anonymizers. The efficacy of the proposed schemes is demonstrated for two very different scenarios (outsourced data sharing and P2P content delivery) with realistic analysis and experiments. In many situations in these two scenarios, rPIR's advantage of low communication cost outweighs its disadvantages, resulting in lower expenditure and/or better quality of service than may be achieved with traditional information-theoretic PIR and anonymizers.
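For intuition about information-theoretic PIR, here is the classic two-server XOR scheme (a minimal sketch of the primitive rPIR builds on, not the paper's ramp-secret-sharing construction; the toy database is hypothetical). Each query alone is a uniformly random subset, so neither server learns which record is wanted, assuming the servers do not collude.

```python
import secrets

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def pir_query(n_records, i):
    """Client: random subset for server 1; the same subset with bit i
    flipped for server 2. Each query alone is uniformly random."""
    s1 = [secrets.randbits(1) for _ in range(n_records)]
    s2 = list(s1)
    s2[i] ^= 1
    return s1, s2

def pir_answer(db, selector):
    """Server: XOR of the selected records (all records equal length)."""
    acc = bytes(len(db[0]))
    for rec, bit in zip(db, selector):
        if bit:
            acc = xor_bytes(acc, rec)
    return acc

# Usage: records in both subsets cancel, so the XOR of the two
# answers is exactly record i.
db = [b"rec0", b"rec1", b"rec2", b"rec3"]
q1, q2 = pir_query(len(db), 2)
assert xor_bytes(pir_answer(db, q1), pir_answer(db, q2)) == db[2]
```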

9.
We continue the study of priority or "greedy-like" algorithms as initiated in Borodin et al. (2003) [10] and extended to graph-theoretic problems in Davis and Impagliazzo (2009) [12]. Graph-theoretic problems pose modeling problems that did not exist in the original applications of Borodin et al. and Angelopoulos and Borodin (2002) [3]. Following the work of Davis and Impagliazzo, we further clarify these concepts. In the graph-theoretic setting, there are several natural input formulations for a given problem, and we show that priority algorithm bounds in general depend on the input formulation. We study a variety of graph problems in the context of arbitrary and restricted priority models corresponding to known "greedy algorithms".

10.
Taylor, S.M. IT Professional, 2004, 6(6): 28-34
Most readily available tools (basic search engines, possibly a news or information service, and perhaps agents and Web crawlers) are inadequate for many information retrieval tasks and downright dangerous for others. These tools either return too much useless material or miss important material. Even when such tools find useful information, the data is still in a text form that makes it difficult to build displays or diagrams. Employing the data in data mining or in standard database operations, such as sorting and counting, can also be difficult. An emerging technology called information extraction (IE) is beginning to change all that, and you might already be using some very basic IE tools without even knowing it. Companies are increasingly applying IE behind the scenes to improve information and knowledge management applications such as text search, text categorization, data mining, and visualization (Rao, 2003). IE has also begun playing a key role in fields such as national security, law enforcement, insurance, and biomedical research, which have highly critical information and knowledge needs. In these fields, IE's powerful capabilities are necessary to save lives or substantial investments of time and money. IE views language up close, considering grammar and vocabulary, and tries to determine the details of "who did what to whom" from a piece of text. In its most in-depth applications, IE is domain focused; it does not try to define all the events or relationships present in a piece of text, but focuses only on items of particular interest to the user organization.

11.
In the past several years, various ontologies and terminologies, such as the Gene Ontology, have been developed to enable interoperability across multiple diverse medical information systems. They provide a standard way of representing terms and concepts, thereby supporting easy transmission and interpretation of data for various applications. However, with their growing utilization, not only has the number of available ontologies increased considerably, but they have also become larger and more complex to manage. Toward this end, a growing body of work is emerging in the area of modular ontologies, where the emphasis is on either extracting and managing "modules" of an ontology relevant to a particular application scenario (ontology decomposition) or developing them independently and integrating them into a larger ontology (ontology composition). In this paper, we investigate state-of-the-art approaches to modular ontologies, focusing on techniques that are based on rigorous logical formalisms as well as well-studied graph theories. We analyze and compare how such approaches can be leveraged in developing tools and applications in the biomedical domain. We conclude by highlighting some of the limitations of the modular ontology formalisms and put forward additional requirements to steer their future development.

12.
Distributed data mining applications involving user interaction are now feasible due to advances in processor speed and network bandwidth. These applications are traditionally implemented using ad hoc communication protocols, which are often either cumbersome or inefficient. This paper presents and evaluates a system for sharing state among such interactive distributed data mining applications, developed with the goal of providing both ease of programming and efficiency. Our system, called InterAct, supports data sharing efficiently by allowing caching, by communicating only the modified data, and by allowing relaxed coherence requirements to be specified for reduced communication overhead, as well as data placement for improved locality, on a per-client and per-data-structure basis. Additionally, our system can supply clients with consistent copies of shared data even while the data is being modified. We evaluate the performance of the system on a set of data mining applications that perform queries on data structures summarizing information from the databases of interest. We demonstrate that providing a runtime system such as InterAct results in a 10- to 30-fold improvement in execution time, due to shared data caching, the applications' ability to tolerate stale data (client-controlled coherence), and the ability to off-load some of the computation from the server to the client. Performance is improved without requiring complex communication protocols to be built into the application, since the runtime system uses knowledge about application behavior (encoded by specifying coherence requirements) to automatically optimize the resources used for communication. We also demonstrate that, for our benchmark tests, the quality of the results generated does not deteriorate significantly due to the use of more relaxed coherence protocols.

13.
Data warehouses (DW) form the backbone of the data integration that is necessary for analytical applications, and play an important role in the information technology landscape of many industries. We introduce an approach for addressing the fundamental problem of semantic heterogeneity in the design of data integration requirements during DW development. In contrast to ontology-driven or schema-matching approaches, which propose the automatic resolution of differences ex post, our approach addresses the core problem of data integration requirements: understanding and resolving the different contextual meanings of data fields. We ground the approach firmly in communication theory and build on practices from agile software development. Besides providing relevant insights for the design of data integration requirements, our findings point to communication theory as a sound underlying foundation for a design theory of information systems development.

14.
Interactive visualization of state transition systems
A new method for the visualization of state transition systems is presented. Visual information is reduced by clustering nodes, forming a tree structure of related clusters. This structure is visualized in three dimensions using concepts from cone trees, with an emphasis on symmetry. A number of interactive options are provided as well, allowing the user to superimpose detail information on this tree structure. The resulting visualization enables the user to relate features in the visualization of the state transition graph to semantic concepts in the corresponding process, and vice versa.

15.
Since its inception, situation theory has been concerned with the situated nature of meaning and cognition, a theme which has also recently gained some prominence in Artificial Intelligence. Channel theory is a recently developed framework which builds on concepts introduced in situation theory, in an attempt to provide a general theory of information flow. In particular, the channel-theoretic framework offers an account of fallible regularities: regularities which provide enough structure to an agent's environment to support efficient cognitive processing, but whose reliability is limited to specific circumstances. This paper describes how the framework leads to a different perspective on defeasible reasoning: rather than being seen as reasoning with incomplete information, an agent is seen as exploiting a situated regularity, choosing to use the regularity that seems best suited (trading off reliability and simplicity) to the circumstances it finds itself in. We present a formal model for this task, based on the channel-theoretic framework, and sketch how the model may be used as the basis for a methodology of defeasible situated reasoning, whereby agents reason with simple monotonic regularities but may revise their choice of regularity on learning more about their circumstances.

16.
Research on digital watermarking in information hiding technology
With the rapid development of computer, network, and communication technologies, and in particular the spread of the Internet, the problem of information security has become increasingly prominent. Information hiding, as an effective means of covert communication and intellectual-property protection, is being widely studied and applied. Focusing on digital watermarking, the application of information hiding to copyright protection of multimedia data, this paper gives a systematic review of the related research progress. It first introduces the general concepts and basic principles of information hiding, then analyzes in detail the generic model, essential characteristics, typical algorithms, and attack methods of digital watermarking, and on this basis surveys applications of digital watermarking in different domains. Finally, it forecasts the development directions and application prospects of digital watermarking technology.
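To make the basic principle concrete, here is a minimal spatial-domain watermark sketch: simple least-significant-bit (LSB) embedding in a grayscale image. This is illustrative only, not any specific algorithm surveyed (robust schemes typically embed in transform domains such as DCT/DWT), and the cover image and mark are synthetic.

```python
import numpy as np

def embed_lsb(image, bits):
    """Embed a bit sequence into the least-significant bits of an
    8-bit grayscale image (fragile, illustrative watermark)."""
    flat = image.ravel().copy()
    if len(bits) > flat.size:
        raise ValueError("watermark too long for cover image")
    # Clear each carrier pixel's LSB, then OR in the watermark bit.
    flat[:len(bits)] = (flat[:len(bits)] & 0xFE) | np.asarray(bits, dtype=np.uint8)
    return flat.reshape(image.shape)

def extract_lsb(image, n_bits):
    """Recover the first n_bits watermark bits."""
    return image.ravel()[:n_bits] & 1

# Usage: round-trip a 16-bit mark through a random cover image.
cover = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
mark = np.random.randint(0, 2, 16, dtype=np.uint8)
assert np.array_equal(extract_lsb(embed_lsb(cover, mark), 16), mark)
```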

17.
We propose a unified approach to various sensor network applications using supervised learning. Supervised learning refers to learning from examples, in the form of input-output pairs, by which a system that is not programmed in advance can estimate an unknown function and predict its values for inputs outside the training set. In particular, we examine random wireless sensor networks, in which nodes are randomly distributed in the region of deployment. When operating normally, nodes communicate and collaborate only with other nearby nodes (within communication range). However, a base station, with a more powerful computer on board, can query a node or group of nodes when necessary and perform data fusion. Learning techniques have been applied in many diverse scenarios. Preliminary research shows that a well-known algorithm from learning theory applies effectively to environmental monitoring, tracking of moving objects and plumes, and localization. We consider some basic concepts of learning theory and how they might address the needs of random wireless sensor networks.
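As a concrete, hypothetical instance of learning from input-output pairs in this setting, the sketch below localizes a node by k-nearest-neighbor regression over RSSI fingerprints. The path-loss model and data are synthetic, and this is not the specific algorithm the article studies.

```python
import numpy as np

def knn_localize(train_rssi, train_pos, query_rssi, k=3):
    """Supervised localization sketch: given input-output pairs
    (RSSI fingerprint -> known position), estimate an unknown node's
    position as the mean position of its k nearest fingerprints."""
    d = np.linalg.norm(train_rssi - query_rssi, axis=1)
    nearest = np.argsort(d)[:k]
    return train_pos[nearest].mean(axis=0)

# Usage with synthetic fingerprints from 4 anchors on a 10x10 region.
rng = np.random.default_rng(0)
pos = rng.uniform(0, 10, (200, 2))                      # training positions
anchors = np.array([[0, 0], [0, 10], [10, 0], [10, 10]])
rssi = -np.linalg.norm(pos[:, None] - anchors, axis=2)  # toy path-loss model
print(knn_localize(rssi, pos, -np.linalg.norm([[5, 5]] - anchors, axis=1)))
```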

18.
Preservation of data privacy and protection of sensitive information from potential adversaries constitute a key socio-technical challenge in the modern era of ubiquitous digital transformation. Addressing this challenge requires the analysis of multiple factors: algorithmic choices for balancing privacy and loss of utility, potential attack scenarios that adversaries can undertake, implications for data owners, data subjects, and data sharing policies, and access control mechanisms that need to be built into interactive data interfaces. Visualization has a key role to play as part of the solution space, both as a medium of privacy-aware information communication and as a tool for understanding the link between privacy parameters and data sharing policies. The field of privacy-preserving data visualization has witnessed progress along many of these dimensions. In this state-of-the-art report, our goal is to provide a systematic analysis of the approaches, methods, and techniques used for handling data privacy in visualization. We also reflect on the roadmap ahead by analyzing the gaps and research opportunities for solving some of the pressing socio-technical challenges involving data privacy with the help of visualization.

19.
This paper identifies requirements for an engineering design information management system. Future CAD systems must support a wide range of activities, such as the definition, manipulation, and analysis of complex product information models. These models represent not only the conventional data associated with current CAD applications, but also design information characterizing the correlations between the requirements, functions, behaviors, and physical form of the product. Such functionality is important for both the individual designer and the design organization, as the need to manage information as a corporate asset is becoming a critical component of business strategy. This paper explores these needs using two design studies. The first study illustrates some major concepts related to non-routine design activities, while the second focuses on routine design activities in the context of organizational interactions. These studies were used to elicit the high-level requirements which serve as the basis for the development of prototype software systems. These prototypes are briefly introduced here.

20.
Info-margin maximization for feature extraction
We propose a novel method of linear feature extraction based on info-margin maximization (InfoMargin), developed from an information-theoretic viewpoint. It aims to achieve a low generalization error by maximizing the information divergence between the distributions of different classes while minimizing the entropy of the distribution within each class. We estimate the density of the data in each class with a Gaussian-kernel Parzen window and develop an efficient, fast-converging algorithm to calculate the quadratic entropy and divergence measures. Experimental results show that our method outperforms traditional feature extraction methods in classification and data visualization tasks.
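As a minimal sketch of the quantities involved, the code below estimates the Renyi quadratic entropy and a Cauchy-Schwarz-style quadratic divergence with Gaussian-kernel Parzen windows (stand-ins for the paper's exact measures, not its algorithm); both reduce to averages of pairwise Gaussian kernels, which is what makes them cheap to compute.

```python
import numpy as np

def gaussian_kernel_sum(a, b, sigma):
    """Parzen estimate of the overlap integral int p_a(x) p_b(x) dx:
    the mean of Gaussian kernels with variance 2*sigma^2 over all pairs."""
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    dim = a.shape[1]
    norm = (4 * np.pi * sigma**2) ** (dim / 2)
    return np.exp(-d2 / (4 * sigma**2)).mean() / norm

def quadratic_entropy(x, sigma=1.0):
    """Renyi quadratic entropy H2 = -log int p(x)^2 dx, Parzen-estimated."""
    return -np.log(gaussian_kernel_sum(x, x, sigma))

def cauchy_schwarz_divergence(x, y, sigma=1.0):
    """Quadratic divergence between two class distributions;
    larger means the classes are better separated."""
    vxy = gaussian_kernel_sum(x, y, sigma)
    return -np.log(vxy**2 / (gaussian_kernel_sum(x, x, sigma)
                             * gaussian_kernel_sum(y, y, sigma)))

# Usage: two well-separated Gaussian classes yield a high divergence.
rng = np.random.default_rng(1)
x, y = rng.normal(0, 1, (200, 2)), rng.normal(4, 1, (200, 2))
print(quadratic_entropy(x), cauchy_schwarz_divergence(x, y))
```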
