Similar Documents
20 similar documents retrieved (search time: 427 ms).
1.
This paper presents a short-contact multitouch vocabulary for interacting with scatterplot matrices (SPLOMs) on wall-sized displays. Fling-based gestures address central interaction challenges of such large displays: they avoid long swipes on the typically blunt surfaces, frequent physical navigation by walking to reach screen areas beyond arm's reach horizontally, and uncomfortable postures to reach screen areas vertically. Furthermore, we make use of the display's high resolution and large size by supporting the efficient specification of two-tiered focus + context regions, which are consistently propagated across the SPLOM. These techniques are complemented by axis-centered and lasso-based selection techniques for specifying subsets of the data. An expert review as well as a user study confirmed the potential and general usability of our seamlessly integrated multitouch interaction techniques for SPLOMs on large vertical displays.

2.
Multitouch input devices afford effective solutions for 6DOF (six Degrees of Freedom) manipulation of 3D objects. Mainly focusing on large-size multitouch screens, existing solutions typically require at least three fingers and bimanual interaction for full 6DOF manipulation. However, single-hand, two-finger operations are preferred especially for portable multitouch devices (e.g., popular smartphones) to cause less hand occlusion and relieve the other hand for necessary tasks like holding the devices. Our key idea for full 6DOF control using only two contact fingers is to introduce two manipulation modes and two corresponding gestures by examining the moving characteristics of the two fingers, instead of the number of fingers or the directness of individual fingers as done in previous works. We solve the resulting binary classification problem using a learning-based approach. Our pilot experiment shows that with only two contact fingers and typically unimanual interaction, our technique is comparable to or even better than the state-of-the-art techniques.
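The paper's exact motion features and learned classifier are not reproduced above; purely as an illustrative sketch (the feature names, the fixed threshold, and the threshold rule standing in for the learning-based classifier are assumptions), the following Python snippet shows how the moving characteristics of two contact fingers could be summarized and used to pick one of two manipulation modes.

    import numpy as np

    def two_finger_features(p1_prev, p1_cur, p2_prev, p2_cur):
        """Summarize the motion of two contact points between frames:
        how parallel the two motion vectors are, and how much the
        inter-finger distance and orientation change (assumed features)."""
        v1 = np.asarray(p1_cur, float) - np.asarray(p1_prev, float)
        v2 = np.asarray(p2_cur, float) - np.asarray(p2_prev, float)
        eps = 1e-9
        parallelism = float(np.dot(v1, v2) /
                            (np.linalg.norm(v1) * np.linalg.norm(v2) + eps))
        rel_prev = np.asarray(p2_prev, float) - np.asarray(p1_prev, float)
        rel_cur = np.asarray(p2_cur, float) - np.asarray(p1_cur, float)
        pinch = np.linalg.norm(rel_cur) - np.linalg.norm(rel_prev)   # change in finger spacing
        twist = (np.arctan2(rel_cur[1], rel_cur[0]) -
                 np.arctan2(rel_prev[1], rel_prev[0]))               # change in pair orientation
        return parallelism, pinch, twist

    def classify_mode(parallelism, pinch, twist, threshold=0.8):
        """Toy stand-in for the paper's learned binary classifier:
        near-parallel finger motion selects one manipulation mode,
        diverging motion selects the other."""
        return "mode_A" if parallelism > threshold else "mode_B"

In the actual technique the mode decision is made by a trained classifier over such motion characteristics rather than by a fixed threshold.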

3.
The area of multitouch interaction research is in its infancy. The commercial sector has seen exponential growth in this area with ubiquitous products like the Apple iPhone, iPad, and Microsoft Surface table. In spite of their popularity, developers still find it difficult to extend this novel interface to engineering applications such as computer aided design (CAD), due to insufficient understanding of the factors that affect multitouch interaction when applied to CAD operations. The objective of this research is to (1) outline the key elements of the multitouch interface for CAD, (2) identify the factors affecting the performance of a multitouch enabled CAD modeling environment, and (3) lay a foundation for future research and highlight directions for extending the multitouch interface to CAD and other engineering applications. To demonstrate specific results we conducted mouse emulation experiments, comparing the performance of two finger touch-based interaction techniques (drag state finger touch and track state finger touch) and a standard mouse device for 3D CAD modeling operations. The results indicated that task completion time and error rates are statistically equivalent for the two finger touch-based techniques. However, the error concentration observed in the experiments revealed that for edge selection tasks the track state technique is better suited than the drag state technique. Both finger touch-based techniques lacked precise dimension control while executing the tasks. Including a grid in the design space for modeling purposes reduced user errors. The mouse device outperformed both finger touch-based techniques, yielding statistically better results in terms of task completion time and error rates.

4.
Computers & Graphics, 2012, 36(8): 1119-1131
Multi-touch interfaces have emerged with the widespread use of smartphones. Although many people interact with 2D applications through touchscreens, interaction with 3D applications remains little explored. Most 3D object manipulation techniques have been created by designers who have generally left users out of the design process. We conducted a user study to better understand how non-technical users tend to interact with a 3D object from touchscreen inputs. The experiment consists of 3D cube manipulations along three viewpoints for rotations, scaling and translations (RST). Sixteen users participated and 432 gestures were analyzed. To classify the data, we introduce a taxonomy for 3D manipulation gestures with touchscreens. Then, we identify a set of strategies employed by users to perform the proposed cube transformations. Our findings suggest that each participant uses several strategies, with one predominant. Furthermore, we conducted a study to compare touchscreen and mouse interaction for 3D object manipulations. The results suggest that gestures differ according to the device, and that touchscreens are preferred for the proposed tasks. Finally, we propose some guidelines to help designers create more user-friendly tools.

5.
Distance learning is expanding rapidly, fueled by novel technologies for sharing recorded teaching sessions on the Web. Here, we ask whether 3D stereoscopic (3DS) virtual learning environment teaching sessions are more compelling than typical two-dimensional (2D) video sessions and whether this type of teaching results in superior learning. The research goal was to compare learning in two virtual learning scenarios: a session presented on a 2D display and an identical session presented in 3DS. Participants watched a 2D or 3DS video of an instructor demonstrating a box origami paper-folding task. We compared participants' folding test scores and self-assessment questionnaires of the teaching scenarios and calculated their cognitive load index (CLI) based on electroencephalogram measurements during the observation periods. Results showed a highly significant difference in participants' folding test scores, CLI, and self-assessment questionnaire results between the 2D and 3DS sessions. Our findings indicate that employing stereoscopic 3D technology rather than 2D displays has advantages for the design of emerging virtual and augmented reality applications in distance learning.

6.
Interactions within virtual environments often require manipulating 3D virtual objects. To this end, researchers have endeavoured to find efficient solutions, either using traditional input devices or focusing on different input modalities, such as touch and mid-air gestures. Different virtual environments and diverse input modalities present specific issues for controlling object position, orientation and scaling: traditional mouse input, for example, presents non-trivial challenges because of the need to map between 2D input and 3D actions. While interactive surfaces enable more natural approaches, they still require smart mappings. Mid-air gestures can be exploited to offer natural manipulations mimicking interactions with physical objects. However, these approaches often lack precision and control. All these issues and many others have been addressed in a large body of work. In this article, we survey the state of the art in 3D object manipulation, ranging from traditional desktop approaches to touch and mid-air interfaces, for interacting in diverse virtual environments. We propose a new taxonomy to better classify manipulation properties. Using our taxonomy, we discuss the techniques presented in the surveyed literature, highlighting trends, guidelines and open challenges that can be useful both to future research and to developers of 3D user interfaces.

7.
This paper describes MTi, a biometric method for user identification on multitouch displays. The method is based on features obtained only from the coordinates of the 5 touchpoints of one of the user's hands. This makes MTi applicable to all multitouch displays large enough to accommodate a human hand and able to detect 5 or more touchpoints, without requiring additional hardware and regardless of the display's underlying sensing technology. MTi only requires that the user place a hand on the display with the fingers comfortably stretched apart. A dataset of 34 users was created, on which our method achieved 94.69% identification accuracy. The method's scalability was tested on a subset of the Bosphorus hand database (100 users, 94.33% identification accuracy) and a usability study was performed.
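The abstract does not spell out which features are extracted from the 5 touchpoints or how matching is performed; the sketch below is only a hedged illustration of the general idea (pairwise fingertip distances as a translation- and rotation-invariant feature vector, matched by nearest neighbour against enrolled templates). None of its function names or choices are taken from MTi itself.

    import numpy as np

    def hand_features(touchpoints):
        """touchpoints: five (x, y) fingertip coordinates of one hand.
        Returns the 10 pairwise distances, a translation- and
        rotation-invariant description of the hand geometry (assumed feature set)."""
        pts = np.asarray(touchpoints, dtype=float)
        assert pts.shape == (5, 2)
        return np.asarray([np.linalg.norm(pts[i] - pts[j])
                           for i in range(5) for j in range(i + 1, 5)])

    def identify(features, templates):
        """Nearest-neighbour matching against enrolled user templates
        (a dict mapping user id -> feature vector)."""
        best_user, best_dist = None, float("inf")
        for user, tmpl in templates.items():
            d = np.linalg.norm(features - np.asarray(tmpl, dtype=float))
            if d < best_dist:
                best_user, best_dist = user, d
        return best_user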

8.
We survey the state of the art of spatial interfaces for 3D visualization. Interaction techniques are crucial to data visualization processes, and the visualization research community has been calling for more research on interaction for years. Yet research papers focusing on interaction techniques, in particular for 3D visualization purposes, are not always published in visualization venues, sometimes making it challenging to synthesize the latest interaction and visualization results. We therefore introduce a taxonomy of interaction techniques for 3D visualization. The taxonomy is organized along two axes: the primary source of input on the one hand, and the visualization task the techniques support on the other. Surveying the state of the art allows us to highlight specific challenges and missed opportunities for research in 3D visualization. In particular, we call for additional research in: (1) controlling 3D visualization widgets to help scientists better understand their data, (2) 3D interaction techniques for dissemination, which are under-explored yet show great promise for helping museums and science centers in their mission to share recent knowledge, and (3) developing new measures that move beyond traditional time and error metrics for evaluating visualizations that include spatial interaction.

9.
We present a pressure-augmented tactile 3D data navigation technique, specifically designed for small devices and motivated by the need to support interactive visualization beyond traditional workstations. While touch input has been studied extensively on large screens, current techniques do not scale to small and portable devices. We use phone-based pressure sensing with a binary mapping to separate interaction degrees of freedom (DOF) and thus allow users to easily select different manipulation schemes (e.g., users first perform only rotation and then use a simple pressure input to switch to translation). We compare our technique to traditional 3D-RST (rotation, scaling, translation) using a docking task in a controlled experiment. The results show that our technique increases the accuracy of interaction, with limited impact on speed. We discuss the implications for 3D interaction design and verify that our results extend to older devices with pseudo-pressure and are valid in realistic phone usage scenarios.
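Only the binary nature of the pressure mapping is stated above; the snippet below is a minimal sketch of that idea under assumed names, gains, and thresholds (light touch drives one DOF subset, firm pressure switches to the other), not the technique's actual parameters.

    def select_manipulation_scheme(pressure, threshold=0.5):
        """Binary pressure mapping: pressure below the threshold keeps one
        scheme (here rotation), pressure above it switches to the other
        (here translation). Values and names are illustrative."""
        return "translate" if pressure > threshold else "rotate"

    def apply_touch_drag(state, drag_dx, drag_dy, pressure):
        """Route a 2D touch drag to either rotation or translation DOFs,
        depending on the pressure-selected scheme."""
        if select_manipulation_scheme(pressure) == "rotate":
            state["rot_x"] += drag_dy * 0.5    # degrees per pixel, illustrative gain
            state["rot_y"] += drag_dx * 0.5
        else:
            state["pos_x"] += drag_dx * 0.01   # scene units per pixel, illustrative gain
            state["pos_y"] -= drag_dy * 0.01
        return state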

10.
In the context of mid-air manipulation, this paper presents the effects of allowing the user to dynamically switch between 1DOF, 2DOF, and 3DOF operations. Such “manipulation with switchable DOF” has been widely used in commercial graphics packages and is increasingly being adopted for virtual reality editing, where the 3D scene is constructed through mid-air interaction in immersive virtual environments. However, its effectiveness and advantages/disadvantages have not been investigated. This paper compares “manipulation with switchable DOF” with “manipulation with DOF separation,” which allows only 1DOF operations, and “manipulation without DOF separation,” which provides 3DOF operations. The three methods were evaluated on translation, rotation, and scaling tasks in terms of completion time and precision. The experiment results showed that “manipulation with switchable DOF” outperformed “manipulation with DOF separation” in terms of completion time, whereas the three methods were comparable in terms of precision. “Manipulation with switchable DOF” was further analyzed, and the results showed that more 3DOF operations led to shorter completion times, while more 1DOF operations led to higher precision.
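As a minimal sketch of what switching between 1DOF, 2DOF, and 3DOF operations could look like in code (the mask-based formulation and all names below are assumptions, not taken from the paper), a 3D manipulation delta is filtered through the currently selected DOF mask.

    import numpy as np

    # DOF masks the user can switch between at any time (illustrative set).
    DOF_MASKS = {
        "x_only":  np.array([1.0, 0.0, 0.0]),   # 1DOF
        "xy":      np.array([1.0, 1.0, 0.0]),   # 2DOF
        "full_3d": np.array([1.0, 1.0, 1.0]),   # 3DOF
    }

    def apply_translation(position, hand_delta, active_mask="full_3d"):
        """Filter a mid-air hand displacement through the active DOF mask,
        so the same gesture can drive one, two, or three axes."""
        return (np.asarray(position, float) +
                np.asarray(hand_delta, float) * DOF_MASKS[active_mask])

    # Example: with "x_only" selected, a diagonal hand movement only translates along x.
    # apply_translation([0, 0, 0], [0.2, 0.1, -0.05], "x_only") -> array([0.2, 0., 0.])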

11.
A taxonomy of 3D occlusion management for visualization
While an important factor in depth perception, the occlusion effect in 3D environments also has a detrimental impact on tasks involving discovery, access, and spatial relation of objects in a 3D visualization. A number of interactive techniques have been developed in recent years to directly or indirectly deal with this problem using a wide range of different approaches. In this paper, we build on previous work on mapping out the problem space of 3D occlusion by defining a taxonomy of the design space of occlusion management techniques in an effort to formalize a common terminology and theoretical framework for this class of interactions. We classify a total of 50 different techniques for occlusion management using our taxonomy and then go on to analyze the results, deriving a set of five orthogonal design patterns for effective reduction of 3D occlusion. We also discuss the "gaps" in the design space, areas of the taxonomy not yet populated with existing techniques, and use these to suggest future research directions into occlusion management.

12.
13.
Interaction with high-resolution wall-sized (Powerwall) displays can be a tedious and difficult task due to large display areas and small target sizes. To overcome this, we developed techniques that reduce the precision required to manipulate windows and select data. The manipulation layer speeds up the common tasks of moving and resizing application windows by overlaying them with large, transparent target areas. The Power-Lens magnifies target sizes by automatically appearing once the cursor reaches the region of interest. Two experiments evaluated these techniques against conventional desktop-style interfaces. Experiment 1 showed the window manipulation layer to speed up the tasks of moving and resizing a window by 24% and 27%, respectively. Experiment 2 showed the Power-Lens to speed up the selection of 5 × 5 pixel targets by 18%. Together, our new techniques help to make interaction more fluid on Powerwall displays.
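The study reports only the behaviour and the measured speed-ups; the snippet below is a hedged sketch of how a Power-Lens-style helper could work (the names, ROI radius, and magnification factor are assumptions, not values from the experiments): once the cursor enters a target's region of interest, the target's effective hit area is enlarged.

    def effective_hit(cursor, target_center, target_size,
                      roi_radius=150.0, magnification=3.0):
        """Return True if the cursor hits the target, using a magnified hit
        area whenever the cursor is inside the target's region of interest.
        All thresholds are illustrative."""
        dx = cursor[0] - target_center[0]
        dy = cursor[1] - target_center[1]
        dist = (dx * dx + dy * dy) ** 0.5
        size = target_size * magnification if dist <= roi_radius else target_size
        return dist <= size / 2.0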

14.
Visualizing network data
Networks are critical to modern society, and a thorough understanding of how they behave is crucial to their efficient operation. Fortunately, data on networks is plentiful; by visualizing this data, it is possible to greatly improve our understanding. Our focus is on visualizing the data associated with a network and not on simply visualizing the structure of the network itself. We begin with three static network displays; two of these use geographical relationships, while the third is a matrix arrangement that gives equal emphasis to all network links. Static displays can be swamped with large amounts of data; hence we introduce direct manipulation techniques that permit the graphs to continue to reveal relationships in the context of much more data. In effect, the static displays are parameterized so that interesting views may easily be discovered interactively. The software to carry out this network visualization is called SeeNet.

15.
We discuss spatial selection techniques for three-dimensional datasets. Such 3D spatial selection is fundamental to exploratory data analysis. While 2D selection is efficient for datasets with explicit shapes and structures, it is less efficient for data without such properties. We first propose a new taxonomy of 3D selection techniques, focusing on the amount of control the user has to define the selection volume. We then describe the 3D spatial selection technique Tangible Brush, which gives manual control over the final selection volume. It combines 2D touch with 6-DOF 3D tangible input to allow users to perform 3D selections in volumetric data. We use touch input to draw a 2D lasso, extruding it to a 3D selection volume based on the motion of a tangible, spatially-aware tablet. We describe our approach and present its quantitative and qualitative comparison to state-of-the-art structure-dependent selection. Our results show that, in addition to being dataset-independent, Tangible Brush is more accurate than existing dataset-dependent techniques, thus providing a trade-off between precision and effort.
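Tangible Brush itself follows the tablet's full 6-DOF motion; purely as a simplified, assumed sketch of the underlying idea, the code below extrudes a 2D lasso along a single straight sweep direction and selects the points that fall inside the resulting prism (function and parameter names are illustrative, not the paper's).

    import numpy as np

    def point_in_polygon(pt, poly):
        """2D point-in-polygon test by ray casting."""
        x, y = pt
        inside = False
        n = len(poly)
        for i in range(n):
            x1, y1 = poly[i]
            x2, y2 = poly[(i + 1) % n]
            if (y1 > y) != (y2 > y):
                x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
                if x_cross > x:
                    inside = not inside
        return inside

    def select_points(points, lasso_2d, origin, u_axis, v_axis, sweep_dir, sweep_len):
        """Select 3D points inside a lasso drawn on the tablet plane (spanned
        by the orthonormal u_axis/v_axis at origin) and extruded along
        sweep_dir for sweep_len. Straight extrusion is a simplifying
        assumption; the published technique follows the tablet's motion."""
        u = np.asarray(u_axis, float)
        v = np.asarray(v_axis, float)
        w = np.asarray(sweep_dir, float)
        o = np.asarray(origin, float)
        selected = []
        for p in np.asarray(points, float):
            rel = p - o
            depth = np.dot(rel, w)                     # position along the sweep
            if 0.0 <= depth <= sweep_len:
                uv = (np.dot(rel, u), np.dot(rel, v))  # project into the lasso plane
                if point_in_polygon(uv, lasso_2d):
                    selected.append(p)
        return selected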

16.
We present a new approach for sketching free-form meshes with topology consistency. First, we interpret the given 2D curve as the projection of the 3D curve with minimum curvature. Then we adopt a topology-consistent strategy based on the graph rotation system to trace the simple faces on the interconnecting 3D curves. With the face tracing algorithm, our system can identify the 3D surfaces automatically. After obtaining the boundary curves for the faces, we apply Delaunay triangulation on these faces. Finally, the shape of the triangle mesh that follows the 3D boundary curves is computed using harmonic interpolation. Our system also provides real-time algorithms for both control curve generation and the subsequent surface optimization. By incorporating topological manipulation into geometric modeling, we show that automatically generating such models is both beneficial and feasible.

17.
The availability of commodity volumetric displays provides ordinary users with a new means of visualizing 3D data. Many of these displays are in the class of isotropically emissive light devices, which are designed to directly illuminate voxels in a 3D frame buffer, producing x-ray-like visualizations. While this technology can offer intuitive insight into a 3D object, the visualizations are perceptually different from what a computer graphics or visualization system would render on a 2D screen. This paper formalizes rendering on isotropically emissive displays and introduces a novel technique that emulates traditional rendering effects on isotropically emissive volumetric displays, delivering results that are much closer to what is traditionally rendered on regular 2D screens. Such a technique can significantly broaden the capability and usage of isotropically emissive volumetric displays. Our method takes a 3D data set or object as the input, creates an intermediate light field, and outputs a special 3D volume data set called a lumi-volume. This lumi-volume encodes approximated rendering effects in a form suitable for display with accumulative integrals along unobtrusive rays. When a lumi-volume is fed directly into an isotropically emissive volumetric display, it creates a 3D visualization with surface shading effects that are familiar to the users. The key to this technique is an algorithm for creating a 3D lumi-volume from a 4D light field. In this paper, we discuss a number of technical issues, including transparency effects due to the dimension reduction and sampling rates for light fields and lumi-volumes. We show the effectiveness and usability of this technique with a selection of experimental results captured from an isotropically emissive volumetric display, and we demonstrate its potential capability and scalability with computer-simulated high-resolution results.
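For context (this is the standard emission-only volume model, not an equation quoted from the paper): on an isotropically emissive display every voxel contributes light regardless of what lies in front of it, so the intensity perceived along a viewing ray r(s) of length L is approximately the plain accumulated emission

    I = \int_{0}^{L} e(\mathbf{r}(s)) \, ds

with no absorption or occlusion term, which is why such displays look x-ray-like. The lumi-volume described above pre-bakes shading into the stored emission values e so that these unweighted integrals approximate a conventionally shaded 2D rendering.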

18.
19.
Going beyond established desktop interfaces, researchers have begun re-thinking visualization approaches to make use of alternative display environments and more natural interaction modalities. In this paper, we investigate how spatially-aware mobile displays and a large display wall can be coupled to support graph visualization and interaction. For that purpose, we distribute typical visualization views of classic node-link and matrix representations between displays. The focus of our work lies in novel interaction techniques that enable users to work with personal mobile devices in combination with the wall. We devised and implemented a comprehensive interaction repertoire that supports basic and advanced graph exploration and manipulation tasks, including selection, details-on-demand, focus transitions, interactive lenses, and data editing. A qualitative study was conducted to identify strengths and weaknesses of our techniques. Feedback showed that combining mobile devices and a wall-sized display is useful for diverse graph-related tasks. We also gained valuable insights regarding the distribution of visualization views and interactive tools among the combined displays.

20.
Most 3D modelling software has been developed for conventional 2D displays and, as such, lacks support for true depth perception. This contributes to making polygonal 3D modelling tasks challenging, particularly when models are complex and consist of a large number of overlapping components (e.g. vertices, edges) and objects (i.e. parts). Research has shown that users of 3D modelling software often encounter a range of difficulties, which collectively can be defined as focus and context awareness problems. These include maintaining position and orientation awareness, as well as recognizing distances between individual components and objects in 3D space. In this paper, we present five visualization and interaction techniques we have developed for multi-layered displays to better support focus and context awareness in 3D modelling tasks. The results of a user study we conducted show that three of these five techniques improve users' 3D modelling task performance.
