Similar Documents
20 similar documents found (search time: 31 ms)
1.
Interacting with public displays involves more than what happens between individuals and the system; it also concerns how people experience others around and through those displays. In this paper, we use “performance” as an analytical lens for understanding experiences with a public display called rhythIMs and explore how displays shift social interaction through their mediation. By performance, we refer to a situation in which people are on display and orient themselves toward an audience that may be co-located, imagined, or virtual. To understand interaction with public displays, we use two related notions of collectives—audiences and groups—to highlight the ways in which people orient to each other through public displays. Drawing examples from rhythIMs, a public display that shows patterns of instant messaging and physical presence, we demonstrate that there can be multiple, heterogeneous audiences and show how people experience these different types of collectives in various ways. By taking a performance perspective, we are able to understand how audiences that were not physically co-present with participants still influenced participants’ interpretations of and interactions with rhythIMs. This extension of the traditional notion of audience illuminates the roles audiences can play in a performance.

2.
3.
This article presents a scalable, semi-automated process for studying the usage of public displays. The process consists of gathering anonymous interaction and skeletal data of passersby during public display deployment and programmatically analyzing the data. This article demonstrates the use of the process with the analysis of the Information Wall, a gesture-controlled public information display. Information Wall was deployed on a university campus for one year and collected an extensive data set of more than 100,000 passersby. The main benefits of the process include (1) gathering of large data sets without considerable use of resources, (2) fast, semi-automated data analysis, and (3) applicability to studying the effects of long-term public display deployments. In analyzing the usage and passersby data of the Information Wall in our validation study, the main findings uncovered using the method were (i) most users were first-time users exploring the system, and not many returned to use the system again, and (ii) many users were accompanied by passive users who observed interaction from further away, which could suggest a case of multi-user interaction blindness. In the past, logged data has mainly been used as a supporting method for in situ observations and interviews, and its use has required a considerable amount of manual work. In this article, we argue that logged data analysis can be automated to complement other methods, particularly in the evaluation of long-term deployments.
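The programmatic analysis described above can be illustrated with a small log-summarization routine. This is a hypothetical sketch, not the authors' pipeline: it assumes each anonymous log record is a `(person_id, interacted)` pair and classifies passersby into first-time vs. returning and active vs. passive.

```python
from collections import Counter

def summarize_log(events):
    """Summarize anonymous passerby records.

    `events` uses a hypothetical log format: one (person_id, interacted)
    tuple per detected appearance in front of the display.
    """
    appearances = Counter()   # how often each person was seen
    active = set()            # people who actually interacted
    for person_id, interacted in events:
        appearances[person_id] += 1
        if interacted:
            active.add(person_id)
    first_time = sum(1 for n in appearances.values() if n == 1)
    return {
        "people": len(appearances),
        "first_time": first_time,
        "returning": len(appearances) - first_time,
        "active": len(active),
        "passive": len(appearances) - len(active),  # observed, never interacted
    }
```

Running such a summary over a year-long deployment log is what allows findings like (i) and (ii) above to be produced without manual coding of observations.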

4.
Scrolling interaction is a common and frequent activity allowing users to browse content that is initially off-screen. With the increasing popularity of touch-sensitive devices, gesture-based scrolling interactions (e.g., finger panning and flicking) have become an important element in our daily interaction vocabulary. However, there are currently no comprehensive user performance models for scrolling tasks on touch displays. This paper presents an empirical study of user performance in scrolling tasks on touch displays. In addition to three geometrical movement parameters—scrolling distance, display window size, and target width, we also investigate two other factors that could affect the performance, i.e., scrolling modes—panning and flicking, and feedback techniques—with and without distance feedback. We derive a quantitative model based on four formal assumptions that abstract the real-world scrolling tasks, which are drawn from the analysis and observations of user scrolling actions. The results of a control experiment reveal that our model generalizes well for direct-touch scrolling tasks, accommodating different movement parameters, scrolling modes and feedback techniques. Also, the supporting blocks of the model, the four basic assumptions and three important mathematical components, are validated by the experimental data. In-depth comparisons with existing models of similar tasks indicate that our model performs the best under different measurement criteria. Our work provides a theoretical foundation for modeling sophisticated scrolling actions, as well as offers insights into designing scrolling techniques for next-generation touch input devices.
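The paper's quantitative model is not reproduced here, but the flicking mode it studies is commonly implemented as an exponentially decaying scroll velocity. A minimal sketch under that common assumption (the friction constant `k` is illustrative, not a parameter from the paper):

```python
import math

def flick_position(v0, t, k=4.0):
    """Scrolled distance (px) after time t (s) for a flick with initial
    velocity v0 (px/s) under exponential friction: v(t) = v0 * exp(-k*t)."""
    return (v0 / k) * (1.0 - math.exp(-k * t))

def flick_total(v0, k=4.0):
    """Limit of flick_position as t -> infinity: the integral v0 / k."""
    return v0 / k
```

Under this model, a flick covers a fixed total distance proportional to its initial velocity, which is one reason flicking and panning produce different time-distance trade-offs in scrolling tasks.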

5.
We present the first distributed paradigm for multiple users to interact simultaneously with large tiled rear projection display walls. Unlike earlier works, our paradigm allows easy scalability across different applications, interaction modalities, displays and users. The novelty of the design lies in its distributed nature allowing well-compartmented, application independent, and application specific modules. This enables adapting to different 2D applications and interaction modalities easily by changing a few application specific modules. We demonstrate four challenging 2D applications on a nine projector display to demonstrate the application scalability of our method: map visualization, virtual graffiti, virtual bulletin board and an emergency management system. We demonstrate the scalability of our method to multiple interaction modalities by showing both gesture-based and laser-based user interfaces. Finally, we improve earlier distributed methods to register multiple projectors. Previous works need multiple patterns to identify the neighbors, the configuration of the display and the registration across multiple projectors in logarithmic time with respect to the number of projectors in the display. We propose a new approach that achieves this using a single pattern based on specially augmented QR codes in constant time. Further, previous distributed registration algorithms are prone to large misregistrations. We propose a novel radially cascading geometric registration technique that yields significantly better accuracy. Thus, our improvements allow a significantly more efficient and accurate technique for distributed self-registration of multi-projector display walls.
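The constant-time neighbor-discovery step can be illustrated by the kind of payload such an augmented QR code might carry. This is a hypothetical encoding (the paper's actual augmentation is not specified here): each projector publishes its grid cell plus the display configuration, from which any peer derives its neighbors in O(1).

```python
def encode_payload(row, col, rows, cols):
    """Hypothetical QR payload: this projector's grid cell plus the
    overall rows x cols display configuration."""
    return f"PROJ:{row}:{col}:{rows}:{cols}"

def neighbors_from_payload(payload):
    """Decode a payload and list the valid neighboring grid cells."""
    tag, row, col, rows, cols = payload.split(":")
    assert tag == "PROJ"
    row, col, rows, cols = int(row), int(col), int(rows), int(cols)
    candidates = {
        "left": (row, col - 1), "right": (row, col + 1),
        "up": (row - 1, col), "down": (row + 1, col),
    }
    # keep only cells that exist inside the display grid
    return {d: rc for d, rc in candidates.items()
            if 0 <= rc[0] < rows and 0 <= rc[1] < cols}
```

Because the whole configuration rides in one pattern, a projector learns its neighbors from a single camera observation instead of the multi-round, logarithmic-time exchange of earlier methods.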

6.
In a multi-projector display environment driven by a PC cluster, we design and implement a gesture-based interaction system (GBIS) built on computer vision techniques and a data glove. GBIS integrates a feature-point tracking algorithm that satisfies both color consistency and feature consistency to capture interaction movements, and combines it with the finite-state-machine output of the data glove to achieve real-time immersive gesture interaction. Experimental results show that when the camera capture rate is no lower than 30 frames per second, the system can track arm movements in real time while maintaining high tracking accuracy and interaction reliability.
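The data glove's finite-state-machine output can be sketched as a small transition table. The states and glove events below are illustrative assumptions, not the paper's actual state set:

```python
# Hypothetical gesture FSM: (current_state, glove_event) -> next_state.
TRANSITIONS = {
    ("idle", "fist"): "grab",   # closing the hand starts a grab
    ("grab", "move"): "drag",   # moving while gripping drags the object
    ("drag", "move"): "drag",
    ("grab", "open"): "idle",   # opening the hand releases
    ("drag", "open"): "idle",
}

class GestureFSM:
    def __init__(self):
        self.state = "idle"

    def feed(self, event):
        """Advance on one glove event; unrecognized pairs keep the state."""
        self.state = TRANSITIONS.get((self.state, event), self.state)
        return self.state
```

The vision-based tracker supplies the arm trajectory, while an FSM of this shape turns discrete glove readings into interaction commands.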

7.
8.
As more interactive surfaces enter public life, casual interactions from passersby are bound to increase. Most of these users can be expected to carry a mobile phone or PDA, which nowadays offers significant computing capabilities of its own. This offers new possibilities for interaction between these users’ private displays and large public ones. In this paper, we present a system that supports such casual interactions. We first explore a method to track mobile phones that are placed on a horizontal interactive surface by examining the shadows which are cast on the surface. This approach detects the presence of a mobile device, as opposed to any other opaque object, through the signal strength emitted by the built-in Bluetooth transceiver without requiring any modifications to the devices’ software or hardware. We then go on to investigate interaction between a Sudoku game running in parallel on the public display and on mobile devices carried by passing users. Mobile users can join a running game by placing their devices on a designated area. The only requirement is that the device is in discoverable Bluetooth mode. After a specific device has been recognized, client software is sent to the device, which then enables the user to interact with the running game. Finally, we explore the results of a study which we conducted to determine the effectiveness and intrusiveness of interactions between users on the tabletop and users with mobile devices.

9.
Variable-resolution display techniques present visual information in a display using more than one resolution. For example, gaze-contingent variable-resolution displays allocate computational resources for image generation preferentially to the area around the center of gaze, where visual sensitivity to detail is the greatest. Using such displays reduces the amount of computational resources required as compared with traditional uniform-resolution displays. The theoretical benefits, implementational issues, and behavioral consequences of variable-resolution displays are reviewed. A mathematical analysis of computational efficiency for a two-region variable-resolution display is conducted. The results are discussed in relation to applications that are limited by computational resources, such as virtual reality, and applications that are limited by bandwidth, such as internet image transmission. The potential for variable-resolution display techniques as a viable future technology is discussed.
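The efficiency argument for a two-region display can be illustrated with a back-of-the-envelope pixel count. This is a simplified sketch, not the paper's analysis: it assumes a circular full-resolution fovea and a periphery downsampled by a linear factor `s`, so the periphery's pixel count drops by a factor of s².

```python
import math

def processed_fraction(width, height, fovea_radius, s):
    """Fraction of pixels processed, relative to a uniform-resolution
    display, for a two-region variable-resolution display."""
    total = width * height
    fovea = min(math.pi * fovea_radius ** 2, total)   # full-res region
    periphery = (total - fovea) / (s ** 2)            # downsampled region
    return (fovea + periphery) / total
```

For a 1920x1080 display with a 200-pixel fovea and 4x peripheral downsampling, this yields roughly 12% of the uniform-resolution pixel budget, which is the kind of saving that matters for rendering-bound VR and bandwidth-bound image transmission alike.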

10.
Traditional display devices struggle to present large amounts of imagery and complex content within a limited physical space, whereas AR head-mounted displays can float three-dimensional visualizations in front of the user's eyes, augmenting the real-world view with richer content without occupying additional physical space. We design a desktop augmented display system that fuses an AR virtual space with the real computer screen. A QR-code-based spatial localization technique maps the real computer screen into the virtual space, unifying the interaction space, and a window layout model is constructed so that the system can automatically generate windows and arrange them according to user-defined parameters. On this basis, Bluetooth communication, network transmission, OS-level input mapping, and gaze-assisted speech recognition are used to support multimodal interaction via gestures, keyboard-and-mouse, and voice, and a mouse movement strategy extends multiple mouse operation modes into 3D space. Experimental results show that, compared with traditional interaction methods such as mid-air gestures and mouse interaction, the system saves 10%-30% of average task time on common computer tasks, offers high interaction efficiency, and correctly displays the mouse position during continuous cross-window movement and instantaneous jumps.
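The window layout model can be illustrated with a simple grid placement routine. This is a simplified stand-in, not the paper's actual model: windows of size w×h are placed on a grid of `cols` columns, centered on the screen anchor, with a user-defined gap.

```python
import math

def layout_windows(n, cols, w, h, gap):
    """Centered grid positions (window centers) for n virtual windows."""
    rows = math.ceil(n / cols)
    positions = []
    for i in range(n):
        r, c = divmod(i, cols)
        x = (c - (cols - 1) / 2) * (w + gap)   # columns spread horizontally
        y = ((rows - 1) / 2 - r) * (h + gap)   # rows stack downward
        positions.append((x, y))
    return positions
```

Feeding user-defined parameters (window size, column count, gap) into a model of this shape is what lets the system regenerate the layout automatically instead of requiring manual window placement in 3D.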

11.
Research on human-centered computing systems in industry should keep pace with advances in visual display technology for safety, warning, and interaction. Novel 3D displays that present information at real depth offer potential benefits. Previous research has studied depth in visual search, but depth was mostly not realized through real physical separation. Many areas of Human Factors could be augmented by studying and evaluating the operation of novel 3D displays. Such a study is presented here to better understand the effects on visual search in a depth display of real physical depth, of depth redundantly coded with another feature (an additional mark on a target) distinguishing it from distracters, and of target location. Target location was studied both as the row or column of the visual field in which the target was positioned and in terms of eccentricity outward from the center of the display. In general, depth was found to be beneficial when redundantly coded with another attribute for guiding attention. Targets were found more slowly when located farther from the central fixation point, and interactions between target distinction (depth, mark, or depth+mark) and target location provide implications for designers.

12.
The “Midas Touch” problem has long been a difficult problem existing in gesture-based interaction. This paper proposes a visual attention-based method to address this problem from the perspective of cognitive psychology. There are three main contributions in this paper: (1) a visual attention-based parallel perception model is constructed by combining top-down and bottom-up attention, (2) a framework is proposed for dynamic gesture spotting and recognition simultaneously, and (3) a gesture toolkit is created to facilitate gesture design and development. Experimental results show that the proposed method has a good performance for both isolated and continuous gesture recognition tasks. Finally, we highlight the implications of this work for the design and development of all gesture-based applications.

13.
Gesture-based interaction has become more affordable and ubiquitous as an interaction style in recent years. Because gesture-based interactions lead to fatigue and heaviness in the upper limbs, a problem commonly known as ‘Gorilla-Arm Syndrome’ occurs. Bracelet, an arms-down selection method based on Kinect, is therefore proposed, with the purpose of reducing fatigue in long mid-air gesture interaction sessions. An evaluation with 16 participants, comparing Bracelet with previous methods such as mid-air gestures and other arms-down interactions, showed its effectiveness in reducing fatigue. Since Bracelet helps alleviate fatigue in situations where selection is intensive and has no time limit, it can be used as a ‘plug-in’ for other methods and applied to displays in many public places such as airports, stations, shopping malls and waiting rooms.

14.
We present a novel approach for recreating life-like experiences through an easy and natural gesture-based interaction. By focusing on the locations and transforming the role of the user, we are able to significantly maximise the understanding of an ancient cultural practice, behaviour or event over traditional approaches. Technology-based virtual environments that display object reconstructions, old landscapes, cultural artefacts, and scientific phenomena are coming into vogue. In traditional approaches the user is a visitor navigating through these virtual environments observing and picking objects. However, cultural practices and certain behaviours from nature are not normally made explicit and their dynamics still need to be understood. Thus, our research idea is to bring such practices to life by allowing the user to enact them. This means that the user may re-live a step-by-step process to understand a practice, behaviour or event. Our solution is to enable the user to enact using gesture-based interaction with sensor-based technologies such as the versatile Kinect. This allows easier and natural ways to interact in multidimensional spaces such as museum exhibits. We use heuristic approaches and semantic models to interpret human gestures that are captured from the user’s skeletal representation. We present and evaluate three applications. For each of the three applications, we integrate these interaction metaphors with gaming elements, thereby achieving a gesture-set to enact a cultural practice, behaviour or event. User evaluation experiments revealed that our approach achieved easy and natural interaction with an overall enhanced learning experience.

15.
A gesture-based interaction system for smart homes is a part of a complex cyber-physical environment, for which researchers and developers need to address major challenges in providing personalized gesture interactions. However, current research efforts have not tackled the problem of personalized gesture recognition that often involves user identification. To address this problem, we propose in this work a new event-driven service-oriented framework called gesture services for cyber-physical environments (GS-CPE) that extends the architecture of our previous work gesture profile for web services (GPWS). To provide user identification functionality, GS-CPE introduces a two-phase cascading gesture password recognition algorithm for gesture-based user identification using a two-phase cascading classifier with the hidden Markov model and the Golden Section Search, which achieves an accuracy rate of 96.2% with a small training dataset. To support personalized gesture interaction, an enhanced version of the Dynamic Time Warping algorithm with multiple gestural input sources and dynamic template adaptation support is implemented. Our experimental results demonstrate the performance of the algorithm can achieve an average accuracy rate of 98.5% in practical scenarios. Comparison results reveal that GS-CPE has faster response time and higher accuracy rate than other gesture interaction systems designed for smart-home environments.
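The enhanced algorithm mentioned above builds on Dynamic Time Warping; the classic DTW distance it extends can be sketched as follows (the paper's template adaptation and multi-source input support are omitted):

```python
def dtw_distance(a, b, dist=lambda x, y: abs(x - y)):
    """Classic O(|a|*|b|) dynamic time warping between two sequences."""
    inf = float("inf")
    n, m = len(a), len(b)
    # D[i][j]: cost of the best warp aligning a[:i] with b[:j]
    D = [[inf] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = dist(a[i - 1], b[j - 1])
            D[i][j] = cost + min(D[i - 1][j],      # stretch a
                                 D[i][j - 1],      # stretch b
                                 D[i - 1][j - 1])  # match step
    return D[n][m]
```

A recorded gesture trajectory would then be matched against each stored template by nearest DTW distance, which is what makes the method robust to users performing the same gesture at different speeds.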

16.
This study used content analysis of journal articles from 2001 to 2013 to explore the characteristics and trends of empirical research on gesture-based computing in education. Among the 3018 articles retrieved from 5 academic databases by a comprehensive search, 59 articles were identified manually and then analyzed. The distributions and trends analyzed were research methods, study disciplines, learning content, technology used, and intended settings of the gesture-based learning systems. Furthermore, instructional interventions were also analyzed based on the learning context or the sub-education domain to which they belonged, to ascertain whether any instructional intervention was applied in these systems. It was found that experimental design research is the most commonly used method (72.9%), followed by design-based research (20.3%). The findings indicate that Nintendo Wii is the gesture-based device that is most often used (40%), while the domain in which the technology is most frequently used is special education (42.4%). The same trend was also found in a further analysis, which identified that the domain using the Wii most is special education (70%). Among all the identified learning topics, motor skills learning has the highest percentage (44%). When grouping these topics into three domains of knowledge (procedural, conceptual, and both), the result demonstrates that procedural and conceptual types of knowledge are equally distributed in the gesture-based learning studies. Finally, a comparison of instructional interventions of gesture-based learning systems in different sub-education domains is reported.

17.
Traditionally, gesture-based interaction in virtual environments is composed of either static, posture-based gesture primitives or temporally analyzed dynamic primitives. However, it would be ideal to incorporate both static and dynamic gestures to fully utilize the potential of gesture-based interaction. To that end, we propose a probabilistic framework that incorporates both static and dynamic gesture primitives. We call these primitives Gesture Words (GWords). Using a probabilistic graphical model (PGM), we integrate these heterogeneous GWords and a high-level language model in a coherent fashion. Composite gestures are represented as stochastic paths through the PGM. A gesture is analyzed by finding the path that maximizes the likelihood on the PGM with respect to the video sequence. To facilitate online computation, we propose a greedy algorithm for performing inference on the PGM. The parameters of the PGM can be learned via three different methods: supervised, unsupervised, and hybrid. We have implemented the PGM model for a gesture set of ten GWords with six composite gestures. The experimental results show that the PGM can accurately recognize composite gestures.

18.
In recent years, consumers have witnessed a technological revolution that has delivered more-realistic experiences in their own homes through high-definition, stereoscopic televisions and natural, gesture-based video game consoles. Although these experiences are more realistic, offering higher levels of fidelity, it is not clear how the increased display and interaction aspects of fidelity impact the user experience. Since immersive virtual reality (VR) allows us to achieve very high levels of fidelity, we designed and conducted a study that used a six-sided CAVE to evaluate display fidelity and interaction fidelity independently, at extremely high and low levels, for a VR first-person shooter (FPS) game. Our goal was to gain a better understanding of the effects of fidelity on the user in a complex, performance-intensive context. The results of our study indicate that both display and interaction fidelity significantly affect strategy and performance, as well as subjective judgments of presence, engagement, and usability. In particular, performance results were strongly in favor of two conditions: low-display, low-interaction fidelity (representative of traditional FPS games) and high-display, high-interaction fidelity (similar to the real world).

19.
Spatially aware handheld displays are a promising approach to interact with complex information spaces in a more natural way by extending the interaction space from the 2D surface to the 3D physical space around them. This is achieved by utilizing their spatial position and orientation for interaction purposes. Technical solutions for spatially tracked displays already exist in research laboratories, e.g., embedded in a tabletop environment. Along with a large stationary screen, such multi-display systems provide a rich design space with a variety of benefits to users, e.g., the explicit support of co-located parallel work and collaboration. As we see a great future in the underlying interaction principles, the question is how the technology can be made accessible to the public. With our work, we want to address this issue. In the long term, we envision a low-cost tangible display ecosystem that is suitable for everyday usage and supports both active displays (e.g., the iPad) and passive projection media (e.g., paper screens and everyday objects such as a mug). The two major contributions of this article are a presentation of an exciting design space and a requirement analysis regarding its technical realization with special focus on a broad adoption by the public. In addition, we present a proof of concept system that addresses one technical aspect of this ecosystem: the spatial tracking of tangible displays with a consumer depth camera (Kinect).

20.
In the ubiquitous computing environment, people will interact with everyday objects (or computers embedded in them) in ways different from the usual and familiar desktop user interface. One such typical situation is interacting with applications through large displays such as televisions, mirror displays, and public kiosks. With these applications, the use of the usual keyboard and mouse input is not usually viable (for practical reasons). In this setting, the mobile phone has emerged as an excellent device for novel interaction. This article introduces user interaction techniques using a camera-equipped hand-held device such as a mobile phone or a PDA for large shared displays. In particular, we consider two specific but typical situations: (1) sharing the display from a distance and (2) interacting with a touch screen display at a close distance. Using two basic computer vision techniques, motion flow and marker recognition, we show how a camera-equipped hand-held device can effectively be used to replace a mouse and share, select, and manipulate 2D and 3D objects, and navigate within the environment presented through the large display.


Copyright©北京勤云科技发展有限公司  京ICP备09084417号