Similar documents
20 similar documents found (search time: 15 ms)
1.
The idea behind sonification is that synthetic non-verbal sounds can represent numerical data and support information processing activities of many different kinds. This article describes some of the ways that sonification has been used in assistive technologies, remote collaboration, engineering analyses, scientific visualisations, emergency services and aircraft cockpits. Approaches for designing sonifications are surveyed, and issues raised by the existing approaches and applications are outlined. Relations are drawn to other areas of knowledge where similar issues have also arisen, such as human-computer interaction, scientific visualisation, and computer music. The article concludes with a list of resources for readers who wish to explore the topic further.
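The most common design approach the survey covers is parameter mapping, in which each data value is mapped onto an attribute of sound such as pitch. A minimal sketch of that idea (the function name and pitch range are our own illustrative choices, not from the article):

```python
# Parameter-mapping sonification sketch: map each data value linearly
# onto a pitch range, so listeners can hear the shape of the series.

def sonify(values, f_lo=220.0, f_hi=880.0):
    """Map each value in `values` to a frequency in [f_lo, f_hi] Hz."""
    v_min, v_max = min(values), max(values)
    span = (v_max - v_min) or 1.0          # avoid division by zero for flat data
    return [f_lo + (v - v_min) / span * (f_hi - f_lo) for v in values]

temps = [12.0, 15.5, 19.0, 15.5, 12.0]
print(sonify(temps))   # low temperatures -> low pitch, high -> high pitch
```

The frequencies would then be handed to a synthesiser; the mapping itself is where most sonification design decisions live.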

2.
The State of the Art in Flow Visualisation: Feature Extraction and Tracking   (Total citations: 3; self-citations: 0; citations by others: 3)
Flow visualisation is an attractive topic in data visualisation, offering great challenges for research. Very large data sets must be processed, consisting of multivariate data at large numbers of grid points, often arranged in many time steps. Recently, the steadily increasing performance of computers has again become a driving force for new advances in flow visualisation, especially in techniques based on texturing, feature extraction, vector field clustering, and topology extraction. In this article we present the state of the art in feature‐based flow visualisation techniques. We present numerous feature extraction techniques, categorised according to the type of feature. Next, feature tracking and event detection algorithms are discussed, for studying the evolution of features in time‐dependent data sets. Finally, various visualisation techniques are demonstrated. ACM CCS: I.3.8 Computer Graphics—applications

3.
Abstract. The paper proposes a new method for efficient triangulation of large, unordered sets of 3D points using a CAD model comprising NURBS entities. It is primarily aimed at engineering applications involving analysis and visualisation of measured data, such as inspection, where a model of the object in question is available. Registration of the data to the model is the necessary first step, enabling the triangulation to be efficiently performed in 2D, on the projections of the measured points onto the model entities. The derived connectivity is then applied to the original 3D data. Improvement of the generated 3D mesh is often necessary, involving mesh smoothing, constraint-based elimination of redundant triangles and merging of mesh patches. Examples involving random measurements on aerospace and automotive free-form components are presented. Received: 30 August 1999 / Accepted: 10 January 2000

4.
Language visualisation consists of using consistent and systematic mappings between language expressions and graphical forms, where the graphical forms constitute or convey the meaning of the expressions. Primitive-based applications are described for both natural and artificial language (story visualisation and program visualisation, respectively). On the basis of these and other applications some foundational concepts are identified in a bottom-up theory of visualisation. A universal visualisation system architecture is proposed, as is a basic visual object taxonomy for classifying any visualisation object. Also, preliminary steps are taken towards constructing a top-down theory.

5.
We have developed a novel approach to the extraction of cloud base height (CBH) from pairs of whole-sky imagers (WSIs). The core problem is to spatially register cloud fields from widely separated WSIs; once this is done, triangulation provides the CBH measurements. The wide camera separation and the self-similarity of clouds defeat standard matching algorithms when applied to static views of the sky. In response, we use optical flow methods that exploit the fact that modern WSIs provide image sequences. We describe the algorithm, a confidence metric for its performance, a method to correct the severe projective effects of the WSI camera, and results on real data.
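Once a cloud feature has been registered between the two images, the triangulation step reduces to simple geometry. A toy planar sketch (assuming the feature lies in the vertical plane through both cameras, which ignores the projective corrections the paper addresses):

```python
import math

# Idealised two-site triangulation of cloud base height: with cameras a
# known baseline apart, the height follows from the elevation angle of
# the matched feature at each site.
# Geometry: x = h/tan(e1) and baseline - x = h/tan(e2),
# so h = baseline / (cot(e1) + cot(e2)).

def cloud_base_height(baseline_m, elev1_deg, elev2_deg):
    """Height (m) of a point seen at elevations elev1/elev2 from two
    sites separated by baseline_m metres, planar approximation."""
    cot1 = 1.0 / math.tan(math.radians(elev1_deg))
    cot2 = 1.0 / math.tan(math.radians(elev2_deg))
    return baseline_m / (cot1 + cot2)

print(round(cloud_base_height(2000.0, 45.0, 45.0)))  # -> 1000
```

The hard part, as the abstract notes, is not this formula but producing the registration reliably despite cloud self-similarity.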

6.
Optimized triangle mesh reconstruction from unstructured points   (Total citations: 3; self-citations: 1; citations by others: 3)
A variety of approaches have been proposed for polygon mesh reconstruction from a set of unstructured sample points. Most resulting meshes suffer from severe aliasing at sharp features and contain a large number of unnecessary faces, so they need to be optimized against the input sample points in a postprocess. In this paper, we propose a fast algorithm to reconstruct high-quality meshes from sample data. The core of our proposed algorithm is a new mesh evaluation criterion which takes full advantage of the relation between the sample points and the reconstructed mesh. Based on this evaluation criterion, we develop the operations needed to efficiently incorporate data preprocessing, isosurface polygonization, mesh optimization and mesh simplification into one simple algorithm, which can generate high-quality meshes from unstructured point clouds with time and space efficiency. Published online: 28 January 2003 Correspondence to: Y.-J. Liu

7.
Information visualisation often requires good navigation aids on large trees, which represent the underlying abstract information. Using trees for information visualisation requires novel user interface techniques, visual clues, and navigational aids. This paper describes a visual clue: using the so-called Strahler numbers, a map is provided that indicates which parts of the tree are interesting. A second idea is that of "folding" away subtrees that are too "different" in some sense, thereby reducing the visual complexity of the tree. Examples demonstrate these techniques, and the further challenges in this area are outlined.
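The Strahler number itself is a standard recursive measure of branching complexity: leaves get 1, and an internal node takes the maximum of its children's numbers, plus one when that maximum is attained by two or more children. A sketch (the nested-list tree encoding is our own choice, not the paper's):

```python
# Strahler number of a tree node: high values mark the heavily
# branching "interesting" regions that the visual-clue map highlights.

def strahler(tree):
    """tree is a list of subtrees; a leaf is the empty list."""
    if not tree:
        return 1
    nums = [strahler(child) for child in tree]
    top = max(nums)
    return top + 1 if nums.count(top) >= 2 else top

leaf = []
print(strahler(leaf))                          # 1
print(strahler([leaf, leaf]))                  # 2: two equal branches
print(strahler([[leaf, leaf], leaf]))          # 2: one branch dominates
print(strahler([[leaf, leaf], [leaf, leaf]]))  # 3
```

Colouring nodes by this value gives exactly the kind of at-a-glance map of tree structure the abstract describes.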

8.
This paper reports a systematic review of shared visualisation based on fifteen papers from 2000 to 2013. The findings identified five shared visualisation strategies that represent the ways implemented to process data sharing and knowledge to arrive at the desired level of understanding. Four visualisation techniques were also identified to show how shared cognition is made possible in designing tools for mediating data or knowledge among the users involved. These findings provide research opportunities in integrating rich interactive data visualisation for mobile-based technologies as an effective means of supporting collaborative work. Finally, the social, task and cognitive elements which can be significantly supported by shared visualisation are presented, along with a guideline for future researchers seeking to design shared visualisation-based systems.

9.
The opportunities for context-aware computing are fast expanding. Computing systems can be made aware of their environment by monitoring attributes such as their current location, the current time, the weather, or nearby equipment and users. Context-aware computing often involves retrieval of information: it introduces a new aspect to technologies for information delivery; currently these technologies are based mainly on contemporary approaches to information retrieval and information filtering. In this paper, we consider how the closely related, but distinct, topics of information retrieval and information filtering relate to context-aware retrieval. Our thesis is that context-aware retrieval is as yet a sparsely researched and sparsely understood area, and we aim in this paper to make a start towards remedying this.

10.
The introduction of integrative approaches to biomedical research (integrative biology, physiome, Virtual Physiological Human, etc.) poses original problems to computer-aided medicine: the need to operate with large amounts of data that are strongly heterogeneous in structure, format and even in the knowledge domain that generated them; the need to integrate all of these data into a coherent whole; and the further complication that, increasingly often, these data are captured at very different dimensional and/or temporal scales. The present study describes a first attempt at providing an interactive visualisation environment for homogeneous biomedical data defined over radically different spatial or temporal scales. In particular, we describe new strategies for managing the dimensional information of highly heterogeneous data types, for managing temporal multiscaling, and for 3D unstructured spatial multiscale visualisation, together with the related interaction paradigms and user interface. Preliminary results with a prototype implementation based on the OpenMAF application framework (http://www.openmaf.org) indicate that it is possible to develop effective environments for interactive visualisation of multiscale biomedical data.

11.
This study analysed the impact of age and domain knowledge on the usability of some of the state-of-the-art opinion visualisation techniques. A questionnaire survey was designed to ask the users’ level of agreement or disagreement about the selected opinion visualisation techniques against a set of information visualisation metrics. The data were collected by conducting seminars and using a web-based online questionnaire. We categorised participants (N = 146) into three age groups (≤20 years: teenagers; 21–30 years: young adults; >30 years: adults). According to domain knowledge, participants were classified into two groups, one having knowledge of human–computer interaction (HCI users) and the other without this knowledge (non-HCI users). The collected data were analysed using an independent-samples t-test and analysis of variance. It is concluded that there are significant differences between the perceptions of HCI and non-HCI users on visual appeal, understandability, user friendliness, intuitiveness, informativeness, usefulness, comprehensiveness, comparison ability, and pre-knowledge requirement. Moreover, age was found to be significant for visual appeal, comprehensiveness, intuitiveness, and pre-knowledge requirement.
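The independent-samples comparison mentioned above boils down to a t statistic on the two groups' ratings. A sketch of the Welch form of that statistic (the rating values below are invented for illustration; the paper's data and exact test variant are not given in the abstract):

```python
import math
from statistics import mean, variance

# Welch's t statistic for two independent samples: the group mean
# difference divided by the combined standard error.  Large |t| values
# indicate the kind of significant group difference the study reports.

def welch_t(a, b):
    va, vb = variance(a), variance(b)   # sample variances
    return (mean(a) - mean(b)) / math.sqrt(va / len(a) + vb / len(b))

hci     = [4, 5, 4, 5, 4]   # hypothetical Likert ratings, HCI group
non_hci = [3, 3, 4, 2, 3]   # hypothetical ratings, non-HCI group
print(round(welch_t(hci, non_hci), 2))  # 3.5
```

The statistic would then be compared against a t distribution with Welch-Satterthwaite degrees of freedom to obtain the p-value.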

12.
Data overload is a generic and tremendously difficult problem that has only grown with each new wave of technological capabilities. As a generic and persistent problem, three observations are in need of explanation: Why is data overload so difficult to address? Why has each wave of technology exacerbated, rather than resolved, data overload? How are people, as adaptive responsible agents in context, able to cope with the challenge of data overload? In this paper, we first examine three different characterisations that have been offered to capture the nature of the data overload problem and how they lead to different proposed solutions. As a result, we propose that (a) data overload is difficult because of the context sensitivity problem – meaning lies, not in data, but in relationships of data to interests and expectations – and (b) new waves of technology exacerbate data overload when they ignore or try to finesse context sensitivity. The paper then summarises the mechanisms of human perception and cognition that enable people to focus on the relevant subset of the available data despite the fact that what is interesting depends on context. By focusing attention on the root issues that make data overload a difficult problem and on people’s fundamental competence, we have identified a set of constraints that all potential solutions must meet. Notable among these constraints is the idea that organisation precedes selectivity. These constraints point toward regions of the solution space that have been little explored. In order to place data in context, designers need to display data in a conceptual space that depicts the relationships, events and contrasts that are informative in a field of practice.

13.
The major drawback of the existing cluster placement scheme is the long response time caused by admission control if the number of clusters and the number of users are large. A circular skip-cluster placement scheme is proposed to reduce the size of the data buffer as well as the system response time. Furthermore, the popularity of each video is different in the real world. We propose a new popularity-based data allocation scheme to allocate data units within a cluster such that the corresponding data units of these popular videos are stored in those cylinders at one end of each cluster. Due to a higher spatial locality within these hot cylinders, some data units requested by the users are stored in the same cylinder such that one seek operation, one rotation, and one transfer operation are required to retrieve these data units. Therefore, the time required to retrieve data for these requests can be reduced, thus also reducing the system response time. Based on our results, the buffer size and the system response time can be reduced by half or more. These findings are essential for constructing video-on-demand systems that provide satisfactory performance.
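The popularity-based placement idea can be sketched in a few lines: rank videos by popularity and assign their data units to cylinders starting from one end of the cluster, so hot videos end up sharing cylinders. The flat cluster/cylinder model and the function below are a simplification for illustration, not the paper's actual scheme:

```python
# Popularity-based placement sketch: hot videos' data units are packed
# into the cylinders at one end of a cluster, so a single seek can
# often serve several pending requests for popular content.

def allocate(videos, cylinder_capacity):
    """videos: {name: (popularity, n_units)} -> list of cylinders,
    each a list of (video, unit_index); hottest videos come first."""
    ranked = sorted(videos, key=lambda v: videos[v][0], reverse=True)
    units = [(v, i) for v in ranked for i in range(videos[v][1])]
    return [units[i:i + cylinder_capacity]
            for i in range(0, len(units), cylinder_capacity)]

layout = allocate({"A": (0.6, 2), "B": (0.3, 2), "C": (0.1, 2)}, 3)
print(layout[0])   # the hot cylinder: [('A', 0), ('A', 1), ('B', 0)]
```

With this layout, concurrent requests for videos A and B can be served from one cylinder, which is the spatial-locality effect the abstract credits for the reduced response time.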

14.
In order to get useful information from various kinds of information sources, we first apply a searching process with query statements to retrieve candidate data objects (called a hunting process in this paper) and then apply a browsing process to check the properties of each object in detail by visualizing candidates. In traditional information retrieval systems, the hunting process determines the quality of the result, since there are only a few candidates left for the browsing process. In order to retrieve data from widely distributed digital libraries, the browsing process becomes very important, since the properties of data sources are not known in advance. After getting data from various information sources, a user checks the properties of the data in detail using the browsing process. The result can be used to improve the hunting process or to select more appropriate visualization parameters. Visualizing relationships among data is very important, but becomes too time-consuming if the candidate set is large, for example, over one hundred objects. One of the important problems in handling information retrieval from a digital library is to create efficient and powerful visualization mechanisms for the browsing process. One promising way to solve the visualization problem is to map each candidate data object into a location in three-dimensional (3D) space using a proper distance definition. In this paper, we will introduce the functions and organization of a system having a browsing navigator to achieve an efficient browsing process in 3D information search space. This browsing navigator has the following major functions:
1. Selection of features which determine the distance for visualization, in order to generate a uniform distribution of candidate data objects in the resulting space.
2. Calculation of the location of the data objects in 2D space using the selected features.
3. Construction of 3D browsing space by combining 2D spaces, in order to find the required data objects easily.
4. Generation of oblique views of the 3D browsing space and data objects, reducing the overlap of data objects in order to make navigation easy for the user in 3D space.
Examples of this browsing navigator applied to book data are shown. Received: 15 December 1997 / Revised: June 1999

15.
Locating and accessing data repositories with WebSemantics   (Total citations: 1; self-citations: 0; citations by others: 1)
Many collections of scientific data in particular disciplines are available today on the World Wide Web. Most of these data sources are compliant with some standard for interoperable access. In addition, sources may support a common semantics, i.e., a shared meaning for the data types and their domains. However, sharing data among a global community of users is still difficult for the following reasons: (i) data providers need a mechanism for describing and publishing available sources of data; (ii) data administrators need a mechanism for discovering the location of published sources and obtaining metadata from these sources; and (iii) users need a mechanism for browsing and selecting sources. This paper describes a system, WebSemantics, that accomplishes these tasks. We describe an architecture for the publication and discovery of scientific data sources, which is an extension of the World Wide Web architecture and protocols. We support catalogs containing metadata about data sources for some application domain. We define a language for discovering sources and querying their metadata. We then describe the WebSemantics prototype. Edited by H. Korth. Received: 15 July 1999 / Accepted: 13 September 2000 / Published online: 16 April 2002

16.
Abstract. This paper presents structural recursion as the basis of the syntax and semantics of query languages for semistructured data and XML. We describe a simple and powerful query language based on pattern matching and show that it can be expressed using structural recursion, which is introduced as a top-down, recursive function, similar to the way XSL is defined on XML trees. On cyclic data, structural recursion can be defined in two equivalent ways: as a recursive function which evaluates the data top-down and remembers all its calls to avoid infinite loops, or as a bulk evaluation which processes the entire data in parallel using only traditional relational algebra operators. The latter makes it possible for optimization techniques in relational queries to be applied to structural recursion. We show that the composition of two structural recursion queries can be expressed as a single such query, and this is used as the basis of an optimization method for mediator systems. Several other formal properties are established: structural recursion can be expressed in first-order logic extended with transitive closure; its data complexity is PTIME; and over relational data it is a conservative extension of the relational calculus. The underlying data model is based on value equality, formally defined with bisimulation. Structural recursion is shown to be invariant with respect to value equality. Received: July 9, 1999 / Accepted: December 24, 1999
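The "remember all its calls" evaluation on cyclic data can be illustrated with a tiny query. The sketch below (our own toy graph encoding and query, not the paper's language) collects every edge label reachable from a node, memoising visited nodes so that cycles terminate:

```python
# Top-down structural recursion over possibly-cyclic semistructured
# data.  The graph is {node: [(label, target), ...]}; remembering which
# nodes have already been entered plays the role of the paper's
# "remember all calls" rule and guarantees termination on cycles.

def labels(graph, node, _seen=None):
    """All edge labels reachable from `node`."""
    seen = _seen if _seen is not None else set()
    if node in seen:            # call already remembered: stop here
        return set()
    seen.add(node)
    out = set()
    for label, target in graph.get(node, []):
        out.add(label)
        out |= labels(graph, target, seen)
    return out

cyclic = {"a": [("x", "b")], "b": [("y", "a"), ("z", "b")]}
print(sorted(labels(cyclic, "a")))   # ['x', 'y', 'z'], despite the cycles
```

The equivalent bulk evaluation would compute the same set with a transitive-closure-style relational query, which is what opens the door to relational optimization.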

17.
Towards cognitive evaluation of computer-drawn sketches   (Total citations: 1; self-citations: 0; citations by others: 1)
This paper seeks to raise visual comparisons beyond subjective opinions into evidence-based visual reasoning. It provides an informal deductive analysis of the marks in sketches derived with two competing line-filtering algorithms. This prompted the novel speculation that the visual system might be placing some types of anomalies in the foreground of mental 3D space, where they can be ignored. A brief survey is provided to encourage informed debate. Although the proposed cognitive computation was not automated, it justified the rejection of the Douglas–Peucker algorithm in favour of Visvalingam's algorithm in subsequent research within the Cartographic Information Systems Research Group (CISRG).

18.
An airborne air-to-ground data link communication interface was evaluated in a multi-sector-planning scenario using an Airbus A340 full flight simulator. In a close-to-reality experimental setting, eight professional crews performed a flight mission in a mixed voice/data link environment. Experimental factors were the medium (voice vs. data link), workload (low vs. high) and the role in the cockpit (pilot flying vs. pilot non-flying). Data link communication and the usability of the newly developed communication interface were rated positively by the pilots, but there is a clear preference for using a data link only in the cruise phase. Cognitive demands were determined for selected sections of en-route flight; demands are affected mainly by increased communication needs. In the pilots’ view, although a data link has no effect on safety or the possibilities of intervention, it causes more problems. The subjective workload, as measured with the NASA Task Load Index, increased moderately under data link conditions. A data link has no general effect on pilots’ situation awareness, although flight plan negotiations with a data link distract attention from monitoring tasks. The use of a data link has an impact on air-to-ground as well as intra-crew communication. Under data link conditions the pilot non-flying plays a more active role in the cockpit. Before introducing data link communication, several aspects of crew resource management have to be reconsidered. Correspondence and offprint requests to: T. Müller, Technical University of Berlin, Institute of Psychology and Ergonomics, Department of Human–Machine Systems, Jebensstrasse 1, 10623 Berlin, Germany.

19.
Three-dimensional visualisation techniques have been used as a powerful tool in surgical and therapeutic applications. Because medical data sets are large, 3D visualisation requires heavy computation, especially in real-time systems. Many existing methods are sequential and too slow to be practical in real applications. In our previous work, we showed boundary detection and feature point extraction using Hopfield networks. In this paper, a new feature point matching method for 3D surfaces using a Hopfield neural network is proposed. Taking advantage of the parallelism and energy convergence capabilities of Hopfield networks, this method is faster and more stable for feature point matching. Stereoscopic visualisation is the display result of our system. With stereoscopic visualisation, the 3D liver used in the experiment can leap out of the screen in true stereoscopic depth, increasing a doctor's ability to analyse complex graphics.

20.
Large multimedia document archives may hold a major fraction of their data in tertiary storage libraries for cost reasons. This paper develops an integrated approach to the vertical data migration between the tertiary, secondary, and primary storage in that it reconciles speculative prefetching, to mask the high latency of the tertiary storage, with the replacement policy of the document caches at the secondary and primary storage level, and also considers the interaction of these policies with the tertiary and secondary storage request scheduling. The integrated migration policy is based on a continuous-time Markov chain model for predicting the expected number of accesses to a document within a specified time horizon. Prefetching is initiated only if that expectation is higher than those of the documents that need to be dropped from secondary storage to free up the necessary space. In addition, the possible resource contention at the tertiary and secondary storage is taken into account by dynamically assessing the response-time benefit of prefetching a document versus the penalty that it would incur on the response time of the pending document requests. The parameters of the continuous-time Markov chain model, the probabilities of co-accessing certain documents and the interaction times between successive accesses, are dynamically estimated and adjusted to evolving workload patterns by keeping online statistics. The integrated policy for vertical data migration has been implemented in a prototype system. The system also makes profitable use of the Markov chain model for the scheduling of volume exchanges in the tertiary storage library. Detailed simulation experiments with Web-server-like synthetic workloads indicate significant gains in terms of client response time. The experiments also show that the overhead of the statistical bookkeeping and the computations for the access predictions is affordable. Received January 1, 1998 / Accepted May 27, 1998
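The core prefetch rule above — fetch only when a document's expected accesses within the horizon beat those of the eviction victims — can be shown with a deliberately simplified access model. The paper uses a full continuous-time Markov chain; the Poisson-rate stand-in below is our own simplification for illustration:

```python
# Toy version of the migration decision: under a Poisson access model,
# the expected number of accesses to a document within horizon T is
# simply rate * T.  A document is prefetched from tertiary storage only
# if that expectation exceeds the expectation for the cache-resident
# document that would be evicted to make room.

def expected_accesses(rate_per_hour, horizon_hours):
    return rate_per_hour * horizon_hours

def should_prefetch(candidate_rate, victim_rate, horizon_hours=1.0):
    return (expected_accesses(candidate_rate, horizon_hours) >
            expected_accesses(victim_rate, horizon_hours))

print(should_prefetch(candidate_rate=5.0, victim_rate=2.0))  # True
print(should_prefetch(candidate_rate=1.0, victim_rate=2.0))  # False
```

The paper's CTMC additionally conditions these expectations on co-access probabilities and recent interaction times, and weighs the prefetch benefit against contention on the pending request queue.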


Copyright © Beijing Qinyun Technology Development Co., Ltd. (京ICP备09084417号)