Similar Documents
20 similar documents found (search time: 31 ms)
1.
A taxonomy for and analysis of multi-person-display ecosystems (cited 2 times in total: 2 self-citations, 0 citations by others)
Interactive displays are increasingly being distributed in a broad spectrum of everyday environments: they have very diverse form factors and portability characteristics, support a variety of interaction techniques, and can be used by a variable number of people. The coupling of multiple displays creates an interactive “ecosystem of displays”. Such an ecosystem is suitable for particular social contexts, which in turn generates novel settings for communication and performance, as well as challenges in ownership. This paper aims to provide a design space that can inform the designers of such ecosystems. To this end, we provide a taxonomy whose dimensions are the size of the ecosystem and the degree of individual engagement. We identify areas where physical constraints imply certain kinds of social engagement, versus other areas where further work on interaction techniques for coupling displays can open new design spaces.

2.
In the ubiquitous computing environment, people will interact with everyday objects (or the computers embedded in them) in ways different from the usual and familiar desktop user interface. One typical situation is interacting with applications through large displays such as televisions, mirror displays, and public kiosks. With these applications, the usual keyboard and mouse input is generally not viable for practical reasons. In this setting, the mobile phone has emerged as an excellent device for novel interaction. This article introduces user interaction techniques that use a camera-equipped hand-held device, such as a mobile phone or a PDA, with large shared displays. In particular, we consider two specific but typical situations: (1) sharing the display from a distance and (2) interacting with a touch-screen display at close distance. Using two basic computer vision techniques, motion flow and marker recognition, we show how a camera-equipped hand-held device can effectively replace a mouse and be used to share, select, and manipulate 2D and 3D objects, and to navigate within the environment presented through the large display.
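The abstract does not spell out how the motion-flow mapping works; as a rough illustration only, the following sketch (assuming an OpenCV capture pipeline, an assumed gain constant, and a hypothetical move_cursor callback) shows how the mean dense optical flow between consecutive camera frames could be turned into cursor displacements on a remote display.

```python
# Minimal sketch: estimate phone-camera motion with dense optical flow and
# map it to cursor movement on a large shared display. Hypothetical pipeline;
# not the implementation used in the paper above.
import cv2

GAIN = 4.0  # cursor pixels per image pixel of estimated motion (assumed)

def drive_cursor(capture, move_cursor):
    """Read frames from `capture` and call `move_cursor(dx, dy)` per frame."""
    ok, prev = capture.read()
    if not ok:
        return
    prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Dense optical flow between consecutive frames (Farneback's method).
        flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        dx = flow[..., 0].mean()  # mean horizontal image motion
        dy = flow[..., 1].mean()  # mean vertical image motion
        # Panning the camera right makes the scene appear to move left,
        # so invert the sign before applying the gain.
        move_cursor(-GAIN * dx, -GAIN * dy)
        prev_gray = gray

if __name__ == "__main__":
    # Example: print cursor deltas instead of moving a real cursor.
    cap = cv2.VideoCapture(0)
    drive_cursor(cap, lambda dx, dy: print(f"cursor delta: ({dx:+.1f}, {dy:+.1f})"))
```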

3.
Recent developments in microelectronics have encouraged the use of 3D databases to create compelling volumetric renderings of graphical objects. However, even with the computational capabilities of current-generation graphical systems, real-time display of such objects is difficult, particularly when dynamic spatial transformations are involved. In this paper we discuss a type of visual stimulus (the stereokinetic effect display) that is computationally far less complex than a true three-dimensional transformation but yields an equally compelling depth impression, often perceptually indiscriminable from the true spatial transformation. Several possible applications of this technique are discussed (e.g., animating contour maps and air traffic control displays so as to evoke accurate depth percepts).
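For readers unfamiliar with the stimulus class, the classic stereokinetic pattern is a set of nested, eccentric circles rotated rigidly in the image plane, which observers perceive as a three-dimensional cone. The sketch below is an illustration only (the paper's actual stimuli and parameters may differ) and draws one frame of such a pattern; animating the angle over time produces the depth illusion.

```python
# Illustrative sketch of a stereokinetic-effect stimulus: nested eccentric
# circles whose common offset axis is rotated rigidly in the image plane.
import numpy as np
import matplotlib.pyplot as plt

def draw_ske_frame(angle_deg, n_circles=6, max_offset=0.4, ax=None):
    """Draw one frame of the rotating nested-circle pattern."""
    ax = ax or plt.gca()
    theta = np.radians(angle_deg)
    for i in range(n_circles):
        radius = 1.0 - i / n_circles          # progressively smaller circles
        offset = max_offset * i / n_circles   # progressively larger eccentricity
        cx, cy = offset * np.cos(theta), offset * np.sin(theta)
        ax.add_patch(plt.Circle((cx, cy), radius, fill=False, linewidth=2))
    ax.set_aspect("equal")
    ax.set_xlim(-1.5, 1.5)
    ax.set_ylim(-1.5, 1.5)
    ax.axis("off")

if __name__ == "__main__":
    draw_ske_frame(angle_deg=30)
    plt.show()
```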

4.
We are witnessing the spread of many new digital systems in public spaces that feature easy-to-use and engaging interaction modalities such as multi-touch, gestural, tangible, and voice input. This new user-centered paradigm, known as the Natural User Interface (NUI), aims to provide a more natural and rich experience to end users, which supports its adoption in many ubiquitous domains. This holds in particular for Pervasive Displays: systems composed of variously sized displays that support many-to-many interaction with the same public screens at the same time. Because of their public and moderated nature, users need an easy way of adapting such systems to heterogeneous usage contexts in order to support their long-term adoption. In this paper, we propose an End-User Development approach to this problem, introducing TAPAS, a system that combines tangible interaction with a puzzle metaphor and allows users to create workflows on a Pervasive Display to satisfy their needs; its design and visual syntax stem from a study we carried out with designers, whose findings are also part of this work. We then carried out a preliminary evaluation of the system with second-year university students and interaction designers, gathering useful feedback to improve TAPAS and employ it in many other domains.

5.
Bennett KB, Malek DA. Human Factors, 2000, 42(3): 432-450
Animated mimic displays can be used to present system information regarding physical form, function, and causality. However, a potential limitation in current designs has been identified: the presence of ambiguous apparent motion. Two theoretical explanations of ambiguous apparent motion are discussed (the Fourier and correspondence hypotheses). Two alternative designs (stair-step and approximate-sinusoid luminance waveforms) were evaluated. The velocity matches obtained in Experiment 1 indicate that the sinusoidal waveform produced significantly better performance, for both accuracy and latency, than the stair-step waveform. The velocity estimates obtained in Experiment 2 indicate that ambiguous apparent motion was not visible with the sinusoidal waveform but was visible with the stair-step waveform. One of the two hypotheses (correspondence) provides a reasonable fit with the obtained velocity estimates. A fundamental goal in the design of animated mimic displays is to provide unambiguous mappings between perceived velocity and actual flow rates. Critical design factors (e.g., waveform, chromatic/luminance contrast, spatial/temporal frequency) are discussed. Actual or potential applications of this research include the design of more effective animated mimic displays.
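As a hedged illustration of the two waveform designs compared above (the parameter values here are assumptions, not those used in the experiments), the sketch below generates a traveling luminance profile along a pipe: a smooth sinusoid, and a stair-step variant obtained by quantizing it into a few luminance levels.

```python
# Sketch of two traveling luminance waveforms for an animated mimic display:
# an approximate sinusoid and a stair-step (quantized) version.
import numpy as np

def luminance_profile(x, t, wavelength=40.0, speed=60.0, levels=None):
    """Luminance in [0, 1] along pipe positions `x` (pixels) at time `t` (s).

    `levels=None` gives the smooth sinusoid; an integer quantizes it into a
    stair-step waveform with that many luminance steps.
    """
    phase = 2 * np.pi * (x - speed * t) / wavelength
    lum = 0.5 + 0.5 * np.sin(phase)
    if levels is not None:
        lum = np.round(lum * (levels - 1)) / (levels - 1)
    return lum

if __name__ == "__main__":
    x = np.arange(0, 200)                      # pixel positions along the pipe
    smooth = luminance_profile(x, t=0.1)              # approximate sinusoid
    stepped = luminance_profile(x, t=0.1, levels=4)   # stair-step waveform
    print(smooth[:5], stepped[:5])
```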

6.
Various 3D displays have been proposed to show realistic and vivid 3D images, and they have been applied in fields including medicine, entertainment, and advertising. Depending on the application, 3D displays have different pixel structures and sizes. In this paper, we present a 3D-display design method that can be applied regardless of pixel structure and display size. The designable area of the 3D display is suggested by the improved 3D image quality. Manufactured displays are used to verify the proposed method. Furthermore, a light field simulation is performed to confirm the part of the design area that could not be verified with the manufactured displays. With the proposed 3D image-quality model and 3D image simulation based on the light field representation, a general design of 3D displays with various pixel structures can be developed.

7.
Scrolling is a common and frequent activity that allows users to browse content that is initially off-screen. With the increasing popularity of touch-sensitive devices, gesture-based scrolling interactions (e.g., finger panning and flicking) have become an important element in our daily interaction vocabulary. However, there are currently no comprehensive user performance models for scrolling tasks on touch displays. This paper presents an empirical study of user performance in scrolling tasks on touch displays. In addition to three geometrical movement parameters (scrolling distance, display window size, and target width), we also investigate two other factors that could affect performance: scrolling mode (panning vs. flicking) and feedback technique (with and without distance feedback). We derive a quantitative model based on four formal assumptions that abstract real-world scrolling tasks, drawn from the analysis and observation of user scrolling actions. The results of a controlled experiment reveal that our model generalizes well for direct-touch scrolling tasks, accommodating different movement parameters, scrolling modes, and feedback techniques. The supporting blocks of the model, the four basic assumptions and three important mathematical components, are also validated by the experimental data. In-depth comparisons with existing models of similar tasks indicate that our model performs best under different measurement criteria. Our work provides a theoretical foundation for modeling sophisticated scrolling actions, as well as insights into designing scrolling techniques for next-generation touch input devices.

8.
To answer the question “what is 3D good for?”, we reviewed the body of literature concerning the performance implications of stereoscopic 3D (S3D) displays versus non-stereo (2D or monoscopic) displays. We summarized the results of over 160 publications describing over 180 experiments spanning 51 years of research in various fields, including human factors psychology/engineering, human–computer interaction, vision science, visualization, and medicine. Publications were included if they described at least one task with a performance-based experimental evaluation of an S3D display versus a non-stereo display under comparable viewing conditions. We classified each study according to the experimental task(s) of primary interest: (a) judgments of positions and/or distances; (b) finding, identifying, or classifying objects; (c) spatial manipulations of real or virtual objects; (d) navigation; (e) spatial understanding, memory, or recall; and (f) learning, training, or planning. We found that S3D viewing improved performance over traditional non-stereo (2D) displays in 60% of the reported experiments. In 15% of the experiments, S3D either showed a marginal benefit or the results were mixed or unclear. In 25% of the experiments, S3D displays offered no benefit over non-stereo 2D viewing (and in some rare cases harmed performance). From this review, stereoscopic 3D displays were found to be most useful for tasks involving the manipulation of objects and for finding, identifying, or classifying objects or imagery. We examine instances where S3D did not support superior task performance and discuss the implications of our findings for the various fields of research concerning stereoscopic displays within the context of the investigated tasks.

9.
Interacting with public displays involves more than what happens between individuals and the system; it also concerns how people experience others around and through those displays. In this paper, we use “performance” as an analytical lens for understanding experiences with a public display called rhythIMs and explore how displays shift social interaction through their mediation. By performance, we refer to a situation in which people are on display and orient themselves toward an audience that may be co-located, imagined, or virtual. To understand interaction with public displays, we use two related notions of collectives, audiences and groups, to highlight the ways in which people orient to each other through public displays. Drawing examples from rhythIMs, a public display that shows patterns of instant messaging and physical presence, we demonstrate that there can be multiple, heterogeneous audiences and show how people experience these different types of collectives in various ways. By taking a performance perspective, we are able to understand how audiences that were not physically co-present with participants still influenced participants’ interpretations of and interactions with rhythIMs. This extension of the traditional notion of audience illuminates the roles audiences can play in a performance.

10.
In this paper, we explore the use of peoples’ shopping data to raise awareness and to enable reflection about nutrition. In order to ground our Nutriflect approach, we conducted 125 structured interviews in grocery stores. Informed by the results of this exploratory study, we designed a system that shows a household’s collective food consumption patterns via situated displays in the home and through mobile devices in-store. The system aimed to minimize the need for manual entry of nutrition-related data by the users. To evaluate our system, we conducted a 4-week field study in eight households with 21 inhabitants and situated in-store shopping inquiries with a subset of 9 of these users, using actual shopping data from participants. In these studies, we identified issues regarding the interaction design of Internet of Things applications and explored the use of complementary distributed displays to provide tailored cues in context. The approach taken showed the potential to foster reflection about shopping and nutritional choices and for integration with people’s everyday practices.

11.
When completing tasks in complex, dynamic domains, observers must consider the relationships among many variables (e.g., integrated tasks) as well as the values of individual variables (e.g., focused tasks). A critical issue in display design is whether or not a single display format can achieve the dual design goals of supporting performance at both types of tasks. We consider this issue from a variety of perspectives. One relevant perspective is the basic research on attention and object perception, which concentrates on the interaction between visual features and processing capabilities. The principles of configurality are discussed, with the conclusion that they support the possibility of achieving the dual design goals. These considerations are necessary but not sufficient for effective display design. Graphic displays map information from a domain into visual features; the tasks to be completed are defined in terms of the domain, not in terms of the visual features alone. The implications of this subtle but extremely important difference are discussed. The laboratory research investigating alternative display formats is reviewed. Much like the attention literature, the results do not rule out the possibility that the dual design goals can be achieved.

12.
Most studies on tangible user interfaces for tabletop design systems are undertaken from a technology viewpoint. Although there have been studies that focus on the development of new interactive environments employing tangible user interfaces for designers, there is a lack of evaluation with respect to designers' spatial cognition. In this research we study the effects of tangible user interfaces on designers' spatial cognition in order to provide empirical evidence for the anecdotal views of their effect. To highlight the expected changes in spatial cognition while using tangible user interfaces, we compared designers using a tangible user interface on a tabletop system with 3D blocks to designers using a graphical user interface on a desktop computer with a mouse and keyboard. The ways in which designers use the two different interfaces for 3D design were examined using a protocol analysis method. The results reveal that designers using 3D blocks perceived more spatial relationships among multiple objects and spaces, and discovered new visuo-spatial features when revisiting their design configurations. The designers using the tangible interface spent more time relocating objects to different locations to test the moves, and interacted with the external representation through large body movements, implying immersion in the design model. These two physical actions assist designers' spatial cognition by reducing the cognitive load of mental visual reasoning. Further, designers using the tangible interface spent more time restructuring the design problem by introducing new functional issues as design requirements, and produced more discontinuities in the design process, which provide opportunities for reflection on and modification of the design. This research therefore shows that tangible user interfaces change designers' spatial cognition, and that these changes are associated with creative design processes.

13.
The Digital Vision Touch (DViT) system uses smart cameras to determine where a person touches a large display, thereby allowing intuitive human-computer interaction. The cameras process the collected images in such a way as to recognize various object attributes, such as location relative to the display in 3D space. The system can then use this information in feedback to the computer generating the display, enabling touch control of the application. When we touch-enable large displays, multiuser collaboration and the ability to detect pen or finger contact are desirable functions. DViT is a touch-enabling technology with this capability and facilitates human-computer interaction in a natural way. The system we created works with a variety of display sizes, both large and wall-sized formats, and accommodates multiple users simultaneously.
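The abstract does not describe DViT's internal algorithm; as a simplified illustration of the underlying geometry, the sketch below triangulates a touch point from two corner cameras, each of which is assumed to report only the angle between the top edge of the display and its line of sight to the touch.

```python
# Sketch of corner-camera triangulation for a touch point on a large display.
# Two cameras sit at the top-left and top-right corners; each reports the
# angle between the top edge and the line of sight to the touch.
# Coordinates: origin at the top-left corner, x to the right, y downward.
import math

def triangulate_touch(width, angle_left_deg, angle_right_deg):
    """Return (x, y) of the touch point in display coordinates.

    `angle_left_deg`: angle seen by the top-left camera, measured from the
    top edge toward the display surface. `angle_right_deg`: the same for
    the top-right camera, measured from the top edge toward the left.
    """
    ta = math.tan(math.radians(angle_left_deg))
    tb = math.tan(math.radians(angle_right_deg))
    # Intersect the two rays y = x * ta and y = (width - x) * tb.
    x = width * tb / (ta + tb)
    y = x * ta
    return x, y

if __name__ == "__main__":
    # A touch at the center of a 2000 mm wide display, 500 mm below the top
    # edge: both cameras see atan(500/1000) ≈ 26.57 degrees.
    print(triangulate_touch(2000, 26.57, 26.57))  # ≈ (1000.0, 500.0)
```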

14.
Teleimmersion is an emerging technology that enables users to collaborate remotely by generating realistic 3D avatars in real time and rendering them inside a shared virtual space. The teleimmersive environment thus provides a venue for collaborative work on 3D data such as medical imaging, scientific data and models, archaeological datasets, architectural or mechanical designs, remote training (e.g., oil rigs, military applications), and remote teaching of physical activities (e.g., rehabilitation, dance). In this paper, we present our research work performed over the course of several years in developing the teleimmersive technology using image-based stereo and more recently Kinect. We outline the issues pertaining to the capture, transmission, rendering, and interaction. We describe several applications where we have explored the use of the 3D teleimmersion for remote interaction and collaboration among professional and scientific users. We believe the presented findings are relevant for future developers in teleimmersion and apply across various 3D video capturing technologies.
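Image-based stereo capture of this kind ultimately relies on the standard depth-from-disparity relation for a rectified camera pair; this is a general formula, not a detail taken from the system described above.

```latex
% Depth from disparity for a rectified stereo pair:
%   Z -- depth of the scene point,
%   f -- focal length in pixels,
%   B -- baseline (distance between the two cameras),
%   d -- disparity in pixels between the left and right images.
\[
  Z = \frac{f\,B}{d}
\]
```

Smaller disparities correspond to more distant points, which is why the camera baseline and image resolution limit the usable capture volume of such systems.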

15.
16.
In this paper we describe the design and architecture of an adaptive proactive environment in which information reflecting the communal interests of the current inhabitants is proactively displayed on large-scale public displays. Adaptation is achieved through implicit communication between the environment and personal sensor devices worn by users. These devices, called Pendle, serve two purposes: they store user preferences and make them available to the environment, and they allow users to override the environment's proactive behavior by means of simple gestures. The result is a smooth integration of environment-controlled interaction (experienced by the user as implicit interaction, triggered by their presence) and user-controlled explicit interaction. Initial results show that user-controlled adaptation leads to an engaging user experience that is unobtrusive and not distracting.

17.
According to the vision of Ambient Intelligence, technology will seamlessly merge into people’s everyday activities and environments. A challenge facing designers of such systems is to create interfaces that fit in people’s everyday contexts and incorporate the values of daily life. This paper focuses on tangible expressive interaction as one possible approach towards linking everyday experiences to intuitive forms of interaction and presents a number of principles for expressive interaction design in this field. A case study of a tangible expressive interface to control a living room atmosphere projection system (orchestrating living room lighting, audio and video-art) is presented to illustrate and reflect upon the design principles. Furthermore, the case study describes possible techniques towards integrating the design principles into a design method.

18.
Interactive stereo displays allow for natural interaction between the user and the stereo images depicted on the display. In the type of display discussed here, this interaction takes the form of tracking the user's head and hand/arm position. Sensing the user's head position allows for the creation of motion parallax information, an immersive depth cue that can be added to the binocular parallax already present in the display. Sensing the user's hand or arm position allows the user to manipulate the spatial attributes of virtual objects and scenes presented on the display, which can enhance spatial reasoning. Moreover, allowing the user to manipulate virtual objects may permit the creation of a sense of spatial relations among elements in the display via proprioception, which may augment the two parallax cues. The congruence among binocular parallax, motion parallax, and proprioception should increase the sense of depth in the display and increase viewing comfort, as well as enhance the ability of our intuitive reasoning system to make reasoned sense of the perceptual information. These advantages should make interactive stereo displays, which may be classified as a form of cognitive enhancement display, the display of choice in the future.
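Head-coupled motion parallax of the kind described above is commonly implemented by rebuilding an off-axis (asymmetric) viewing frustum from the tracked eye position every frame. The sketch below gives a generic formulation for a screen centered at the origin of the z = 0 plane; it is a minimal illustration of the technique, not the system described in the paper.

```python
# Sketch: off-axis frustum parameters for head-coupled (motion-parallax)
# rendering. The physical screen is centered at the origin in the z = 0
# plane; the tracked eye sits at (ex, ey, ez) with ez > 0 in front of it.
# The returned bounds can be fed to a glFrustum-style projection.
def off_axis_frustum(screen_w, screen_h, eye, near=0.1, far=100.0):
    ex, ey, ez = eye
    assert ez > 0, "eye must be in front of the screen plane"
    scale = near / ez  # project the screen edges onto the near plane
    left   = (-screen_w / 2 - ex) * scale
    right  = ( screen_w / 2 - ex) * scale
    bottom = (-screen_h / 2 - ey) * scale
    top    = ( screen_h / 2 - ey) * scale
    return left, right, bottom, top, near, far

if __name__ == "__main__":
    # Head centered, 0.6 m from a 0.5 m x 0.3 m screen -> symmetric frustum.
    print(off_axis_frustum(0.5, 0.3, eye=(0.0, 0.0, 0.6)))
    # Head shifted 0.1 m to the right -> asymmetric frustum (parallax shift).
    print(off_axis_frustum(0.5, 0.3, eye=(0.1, 0.0, 0.6)))
```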

19.
Large displays have become ubiquitous in our everyday lives, but these displays are designed for sighted people. This paper addresses the need for visually impaired people to access targets on large wall-mounted displays. We developed an assistive interface that exploits mid-air gesture input and haptic feedback, and examined its potential for pointing and steering tasks in human-computer interaction (HCI). In two experiments, blind and blindfolded users performed target acquisition tasks using mid-air gestures and two different kinds of feedback (i.e., haptic feedback and audio feedback). Our results show that participants perform faster in Fitts' law pointing tasks using the haptic feedback interface than using the audio feedback interface. Furthermore, a regression analysis between movement time (MT) and the index of difficulty (ID) demonstrates that the Fitts' law model and the steering law model are both effective for the evaluation of assistive interfaces for the blind. Our work and findings will serve as an initial step toward assisting visually impaired people in easily accessing required information on large public displays using haptic interfaces.
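For reference, the two models used in the regression analysis have the following standard forms (the Shannon formulation of Fitts' law and the straight-tunnel form of the steering law); the constants a and b are fitted empirically for each interface.

```latex
% Fitts' law (Shannon formulation) for pointing:
\[
  MT = a + b \cdot \log_2\!\left(\frac{D}{W} + 1\right) = a + b \cdot ID
\]
% Steering law for a straight tunnel of length D and width W:
\[
  MT = a + b \cdot \frac{D}{W}
\]
% MT: movement time; D: target distance (or tunnel length);
% W: target (or tunnel) width; a, b: empirically fitted constants.
```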

20.
In this paper we present TangiWheel, a collection manipulation widget for tabletop displays. Our implementation is flexible, allowing either multi-touch or tangible interaction, or even a hybrid scheme, to better suit user choice and convenience. Different TangiWheel aspects and features are compared with other existing widgets for collection manipulation. The study reveals that TangiWheel is the first proposal to support a hybrid input modality with a large degree of resemblance between the touch and tangible interaction styles. Several experiments were conducted to evaluate the techniques used in each input scheme, for a better understanding of tangible surface interfaces in complex tasks performed by a single user (e.g., involving a typical master-slave exploration pattern). The results show that tangibles perform significantly better than fingers, despite dealing with a greater number of interactions, in situations that require a large number of acquisitions and basic manipulation tasks such as establishing location and orientation. However, when users have to perform multiple exploration and selection operations that do not require previous basic manipulation tasks, for instance when collections are fixed in the interface layout, touch input is significantly better in terms of required time and number of actions. Finally, when a more elastic collection layout or more complex additional insertion or displacement operations are needed, the hybrid and tangible approaches clearly outperform finger-based interaction.
