Similar Documents
20 similar documents found.
1.
Distributable user interfaces enable users to distribute user interface interaction objects (i.e. panels, buttons, input fields, checkboxes, etc.) across different displays, using a set of distribution primitives to manipulate them in real time. This work presents how this kind of user interface facilitates computer-supported collaborative learning in modern classrooms. These classrooms provide teachers and students with display ecosystems consisting of stationary displays, such as smart projectors and smart TVs, as well as mobile displays owned by teachers and students, such as smartphones, tablets, and laptops. The distribution of user interface interaction objects enables teachers to modify the interaction objects available to students in real time, in order to control and promote collaboration and participation during learning activities. We propose developing this type of application using an extension of the CAMELEON reference framework that supports the definition of UI distribution models. The Essay exercise is presented as a case study in which teachers control collaboration among students by distributing user interface interaction objects.
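To make the idea of distribution primitives concrete, here is a minimal sketch of a manager that tracks which displays currently show which interaction objects; the primitive names (display, undisplay, copy, move) and the classroom identifiers are illustrative assumptions, not the operations defined in the paper.

```python
from collections import defaultdict

class DistributionManager:
    """Tracks which displays currently show each UI interaction object."""
    def __init__(self):
        self.displays = defaultdict(set)       # display id -> widget ids shown on it

    def display(self, widget, target):
        """Show a widget on a display (e.g. push an input field to a student tablet)."""
        self.displays[target].add(widget)

    def undisplay(self, widget, target):
        """Hide a widget on a display."""
        self.displays[target].discard(widget)

    def copy(self, widget, source, target):
        """Show the same widget on an additional display."""
        if widget in self.displays[source]:
            self.displays[target].add(widget)

    def move(self, widget, source, target):
        """Transfer a widget from one display to another."""
        if widget in self.displays[source]:
            self.displays[source].remove(widget)
            self.displays[target].add(widget)

dm = DistributionManager()
dm.display("answer_field", "teacher_projector")
dm.move("answer_field", "teacher_projector", "student_tablet_3")
print(dict(dm.displays))
```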

2.
In the ubiquitous computing environment, people will interact with everyday objects (or computers embedded in them) in ways different from the usual and familiar desktop user interface. One such typical situation is interacting with applications through large displays such as televisions, mirror displays, and public kiosks. With these applications, conventional keyboard and mouse input is usually not viable, for practical reasons. In this setting, the mobile phone has emerged as an excellent device for novel interaction. This article introduces user interaction techniques using a camera-equipped hand-held device, such as a mobile phone or a PDA, for large shared displays. In particular, we consider two specific but typical situations: (1) sharing the display from a distance and (2) interacting with a touch-screen display at a close distance. Using two basic computer vision techniques, motion flow and marker recognition, we show how a camera-equipped hand-held device can effectively be used to replace a mouse and to share, select, and manipulate 2D and 3D objects, and navigate within the environment presented through the large display.
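As a rough illustration of the motion-flow technique, the sketch below estimates the average optical flow between consecutive camera frames with OpenCV and maps it to a cursor displacement; the webcam capture and the move_cursor callback are stand-ins assumed for illustration, not the authors' implementation.

```python
import cv2

def track_motion(move_cursor):
    """Estimate camera motion between consecutive frames with dense optical
    flow and map the average displacement to a cursor movement."""
    cap = cv2.VideoCapture(0)              # a webcam stands in for the phone camera
    ok, frame = cap.read()
    if not ok:
        return
    prev = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        flow = cv2.calcOpticalFlowFarneback(prev, gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        dx, dy = flow[..., 0].mean(), flow[..., 1].mean()
        move_cursor(-dx, -dy)              # moving the device right shifts the scene left
        prev = gray
    cap.release()

# Example: print the cursor deltas instead of driving a real pointer.
# track_motion(lambda dx, dy: print("move cursor by (%+.1f, %+.1f)" % (dx, dy)))
```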

3.
In this article, we describe a multi-layered architecture for sketch-based interaction within virtual environments. Our architecture consists of eight hierarchically arranged layers that are described by giving examples of how they are implemented and how they interact. Focusing on table-like projection systems (such as Virtual Tables or Responsive Workbenches) as human-centered output devices, we show examples of how to integrate parts or all of the architecture into existing domain-specific applications, rather than realizing new general-purpose sketch applications, to make sketching an integral part of the next-generation human-computer interface.

4.
When interacting in a virtual environment, users are confronted with a number of interaction techniques. These interaction techniques may complement each other, but in some circumstances can be used interchangeably. Because of this, it is difficult for the user to determine which interaction technique to use. Furthermore, the use of multimodal feedback, such as haptics and sound, has proven beneficial for some, but not all, users. This complicates the development of such a virtual environment, as designers are not sure about the implications of adding interaction techniques and multimodal feedback. A promising approach to solving this problem lies in the use of adaptation and personalization. By incorporating knowledge of a user's preferences and habits, the user interface should adapt to the current context of use. This could mean that only a subset of all possible interaction techniques is presented to the user. Alternatively, the interaction techniques themselves could be adapted, e.g. by changing their sensitivity or the nature of the feedback. In this paper, we propose a conceptual framework for realizing adaptive personalized interaction in virtual environments. We also discuss how to establish, verify and apply a user model, which forms the first and important step in implementing the proposed conceptual framework. This study results in general and individual user models, which are then verified to benefit users interacting in virtual environments. Furthermore, we investigate how users react to a specific type of adaptation in virtual environments (i.e. switching between interaction techniques). When this adaptation is integrated in a virtual environment, users respond positively to it: their performance significantly improves and their level of frustration decreases.
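The kind of adaptation described (presenting only a subset of techniques based on a user model) can be sketched as a simple lookup of per-technique performance; the contexts, technique names and timing numbers below are invented for illustration and are not taken from the paper's user model.

```python
# Hypothetical per-user model: mean task-completion time (seconds) observed
# for each interaction technique in each context of use.
user_model = {
    ("selection", "standing"): {"ray_casting": 4.1, "virtual_hand": 6.3},
    ("selection", "seated"):   {"ray_casting": 5.0, "virtual_hand": 3.7},
}

def choose_technique(task, posture):
    """Adaptation rule: expose only the technique the user performed best with."""
    scores = user_model[(task, posture)]
    return min(scores, key=scores.get)

print(choose_technique("selection", "seated"))   # -> 'virtual_hand'
```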

5.
A virtual human is an effective interface for interacting with users and plays an important role in carrying out certain tasks. As social networking sites become more and more popular, we propose a Facebook-aware virtual human. In this paper, social networking sites are used to empower virtual humans for interpersonal conversational interaction. We combine the Internet, the physical world and a 3D virtual world to create a new interface through which users interact with an autonomous virtual human that can behave like a real modern human. To take advantage of social networking sites, the virtual human gathers information about a user from their profile, their likes and dislikes, and gauges their mood from their most recent status update. In two user studies, we investigated whether and how this new interface can enhance human-virtual human interaction. The positive results of these studies provide guidelines for the research and development of future virtual human interfaces.

6.
The interactive behavior of context-aware applications depends on the physical and logical context in which the interaction occurs. The main difference between traditional HCI design and context-aware design is how we deal with context. In this article, we pick up on recent ubicomp community trends, drawing from sociology and focusing on interaction's communicative aspects. Starting from the premise that interaction is communication, we propose a new interaction model for context-aware applications. We then derive an architectural framework that developers can use to implement our interaction model. The main benefit of our architecture is that, by modeling context in the user interface, developers can represent the application's inferences visually for users.

7.
IMMIView is an interactive system that relies on multiple modalities and multi-user interaction to support collaborative design review. It was designed to offer natural interaction in visualization setups such as large-scale displays, head-mounted displays or TabletPC computers. To support architectural design, our system provides content creation and manipulation, 3D scene navigation and annotations. Users can interact with the system using laser pointers, speech commands, body gestures and mobile devices. In this paper, we describe how we designed the system to meet architectural user requirements. In particular, our system takes advantage of multiple modalities to provide natural interaction for design review. We also propose a new graphical user interface adapted to architectural user tasks, such as navigation or annotation. The interface relies on a novel stroke-based interaction supported by simple laser pointers as input devices for large-scale displays. Furthermore, input devices such as speech and body tracking allow IMMIView to support multiple users, and allow each user to select different modalities according to their preference and each modality's adequacy for the task at hand. We present a multi-modal fusion system developed to support multi-modal commands in a collaborative, co-located environment, i.e. with two or more users interacting at the same time on the same system. The multi-modal fusion system listens to inputs from all the IMMIView modules in order to model user actions and issue commands. The multiple modalities are fused by a simple rule-based sub-module developed in IMMIView and presented in this paper. A user evaluation of IMMIView is presented. The results show that users feel comfortable with the system and suggest that, for the architectural tasks presented, users prefer the multi-modal approach to more conventional interaction, such as mouse and menus.
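A rule-based fusion step of the kind described might, for example, bind a spoken command to the same user's most recent laser-pointer position inside a short time window. The sketch below shows that single rule; the event structure, field names and the 1.5-second window are assumptions for illustration, not IMMIView's actual fusion module.

```python
from dataclasses import dataclass

@dataclass
class Event:
    user: str          # which co-located user produced the event
    modality: str      # "speech", "laser", "gesture", ...
    payload: dict
    timestamp: float   # seconds

FUSION_WINDOW = 1.5    # assumed: events this close in time may be related

def fuse(speech, pointer_events):
    """Rule: bind a spoken command to the same user's most recent
    laser-pointer position inside the fusion window."""
    candidates = [e for e in pointer_events
                  if e.user == speech.user
                  and e.modality == "laser"
                  and abs(speech.timestamp - e.timestamp) <= FUSION_WINDOW]
    if not candidates:
        return {}
    target = max(candidates, key=lambda e: e.timestamp)
    return {"command": speech.payload["command"],
            "position": target.payload["position"],
            "user": speech.user}

pointers = [Event("alice", "laser", {"position": (120, 340)}, 10.2)]
print(fuse(Event("alice", "speech", {"command": "annotate"}, 10.9), pointers))
```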

8.
Most augmented reality (AR) applications are primarily concerned with letting a user browse a 3D virtual world registered with the real world. More advanced AR interfaces let the user interact with the mixed environment, but the virtual part is typically rather finite and deterministic. In contrast, autonomous behavior is often desirable in ubiquitous computing (Ubicomp), which requires the computers embedded into the environment to adapt to context and situation without explicit user intervention. We present an AR framework that is enhanced by typical Ubicomp features by dynamically and proactively exploiting previously unknown applications and hardware devices, and adapting the appearance of the user interface to persistently stored and accumulated user preferences. Our framework explores proactive computing, multi-user interface adaptation, and user interface migration. We employ mobile and autonomous agents embodied by real and virtual objects as an interface and interaction metaphor, where agent bodies are able to opportunistically migrate between multiple AR applications and computing platforms to best match the needs of the current application context. We present two pilot applications to illustrate design concepts.

9.
Increasingly, computers are becoming tools for communication, information exploration and study for young people, regardless of their abilities. Scientists have been building knowledge on how blind people can substitute hearing or touch for sight, and how the combination of senses, i.e., multimodalities, can provide the user with an effective way of exploiting the power of computers. Evaluation of such multimodal user interfaces in the right context, i.e., appropriate users, tasks, tools and environment, is essential to give designers accurate feedback on blind users' needs. This paper presents a study on how young blind people use computers for everyday tasks with the aid of assistive technologies, aiming to understand what hindrances they encounter when interacting with a computer using individual senses, and what supports them. A common assistive technology is a screen reader, producing output to a speech synthesizer or a Braille display. Those two modes are often used together, but this research studied how visually impaired students interact with computers using either form, i.e., a speech synthesizer or a Braille display. A usability test was performed to assess blind grade-school students' ability to carry out common tasks with the help of a computer, including solving mathematical problems, navigating the web, communicating with e-mail and using word processing. During the usability tests, students were allowed to use either auditory mode or tactile mode. Although blind users most commonly use a speech synthesizer (audio), the results indicate that this was not always the most suitable modality. While the effectiveness of the Braille display (tactile user interface) in accomplishing certain tasks was similar to that of the audio user interface, users' satisfaction was higher with the Braille display. The contribution of this work lies in answering two research questions by analysing two modes of interaction (tactile and speech) while carrying out tasks of varying genre, i.e., web searching, collaboration through e-mail, word processing and mathematics. A second contribution of this work is the classification of observations into four categories: usability and accessibility, software fault, cognitive mechanism and learning method. Observations, practical recommendations and open research problems are then presented and discussed. This provides a framework for similar studies in the future. A third contribution of this work is the elaboration of practical recommendations for user interface designers and a research agenda for scientists.

10.
The human-computer interaction community has often focused on the face that applications present to the user. The authors present a new framework that monitors higher-level events to learn how people access, create, and modify information, rather than how they use applications. By associating information sources with tasks and processes and monitoring user actions, the system can offer task-specific help.

11.
While a lot of progress has been made in improving analyses and tools that aid software development, less effort has been spent on studying how such tools are commonly used in practice. A study into a tool's usage is important not only because it can help improve the tool's usability but also because it can help improve the tool's underlying analysis technology in a common usage scenario. This paper presents a study that explores how (beginner) users work with the Alloy Analyzer, a tool for automatic analysis of software models written in Alloy, a first-order, declarative language. Alloy has been successfully used in research and teaching for several years, but there has been no study of how users interact with the analyzer. We have modified the analyzer to log (some of) its interactions with the user. Using this modified analyzer, 11 students in two graduate classes formulated their Alloy models to solve a problem set (involving two problems, each with one model). Our analysis of the resulting logs (a total of 68 analyzer sessions) yields several interesting observations; based on them, we propose how to improve the analyzer, both the performance of analyses and the user interaction. Specifically, we show that: (i) users often perform consecutive analyses with slightly different models, and thus incremental analysis can speed up the interaction; (ii) users' interaction with the analyzer is sometimes predictable, and, akin to continuous compilation, the analyzer can precompute the result of a future action while the user is editing the model; and (iii) (beginner) users can naturally develop semantically equivalent models that have significantly different analysis time, so it is useful to study manual and automatic model transformations that can improve performance.
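Observation (i) suggests reusing work across near-identical runs. The simplest form of such reuse is memoizing results keyed by a hash of the model text, as sketched below; a real incremental analyzer would go further and reuse solver state across slightly modified models, and the analyze stub here is only a placeholder, not the Alloy Analyzer API.

```python
import hashlib

_cache = {}

def analyze(model_text):
    """Placeholder for the expensive SAT-based analysis of an Alloy model."""
    return "analysis result for a model of %d characters" % len(model_text)

def analyze_with_reuse(model_text):
    """Memoize results by a hash of the model text, so re-running an
    unchanged (or reverted) model costs nothing."""
    key = hashlib.sha256(model_text.encode("utf-8")).hexdigest()
    if key not in _cache:
        _cache[key] = analyze(model_text)
    return _cache[key]

analyze_with_reuse("sig Node { edges: set Node }")   # computed
analyze_with_reuse("sig Node { edges: set Node }")   # served from the cache
```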

12.
Although multi-touch applications and user interfaces have become increasingly common in the last few years, there is no agreed-upon multi-touch user interface language yet. In order to gain a deeper understanding of the design of multi-touch user interfaces, this paper presents semiotic analysis of multi-touch applications as an approach to understanding the way users use and interpret multi-touch interfaces. In a case study, user tests of a multi-touch tabletop application platform called MuTable are analysed with the Communicability Evaluation Method to evaluate to what extent users understand the intended messages (e.g., cues about interaction and functionality) the MuTable platform communicates. The semiotic analysis of this case study shows that although multi-touch interfaces can facilitate user exploration, the lack of well-known standards in multi-touch interface design and in the use of gestures makes the user interface difficult to use and interpret. This conclusion points to the importance of the elusive balance between letting users explore multi-touch systems on their own, on one hand, and guiding users by explaining how to use and interpret the user interface, on the other.

13.
The development of IP-Telephony in recent years has been substantial. The improvement in voice quality and the integration of voice and data, especially the interaction with multimedia, have made 3G communication more promising. Value-added telephony services alleviate dependence on the phone and provide a universal platform for multimodal telephony applications. For example, web-based applications with VoiceXML have been developed to simplify human-machine interaction, because they take advantage of speech-enabled services and make telephone access to the web a reality. However, it is not cost-efficient to build voice-only stand-alone web applications; it is more reasonable to retrofit voice interfaces so that they are compatible and collaborate with existing HTML or XML-based web applications. Therefore, this paper argues that a web service should enable multiple access modalities, so that users can perceive and interact with the site visually and through speech simultaneously. Under this principle, our research develops a prototype multimodal VoIP system with an integrated web-based Mandarin dialog system that adopts automatic speech recognition (ASR), text-to-speech (TTS), a VoiceXML browser, and VoIP technologies to create a user-friendly graphical user interface (GUI) and voice user interface (VUI). Users can use a traditional telephone, a cellular phone, or even a VoIP connection via a personal computer to interact with the VoiceXML server. At the same time, users can browse the web and access the same content with a common HTML or XML-based browser. The proposed system shows excellent performance and can easily be incorporated into a voice-ordering service for wider accessibility.

14.
The context of mobility raises many issues for geospatial applications providing location-based services. Mobile device limitations, such as a small user interface footprint and pen input whilst in motion, result in information overload on such devices and in interfaces that are difficult to navigate and interact with. This has become a major issue as mobile GIS applications are now being used by a wide group of users, including novice users such as tourists, for whom it is essential to provide easy-to-use applications. Despite this, comparatively little research has been conducted to address the mobility problem. We are particularly concerned with the limited interaction techniques available to users of mobile GIS, which play a primary role in the complexity of using such an application whilst mobile. As such, our research focuses on multimodal interfaces as a means to present users with a wider choice of modalities for interacting with mobile GIS applications. Multimodal interaction is particularly advantageous in a mobile context, enabling users of location-based applications to choose the mode of input that best suits their current task and location. The focus of this article is a comprehensive user study which demonstrates the benefits of multimodal interfaces for mobile geospatial applications.

15.
Recent user interface concepts, such as multimedia, multimodal, wearable, ubiquitous, tangible, or augmented-reality-based (AR) interfaces, each cover different approaches that are all needed to support complex human-computer interaction. Increasingly, an overarching approach towards building what we call ubiquitous augmented reality (UAR) user interfaces, which include all of the concepts just mentioned, will be required. To this end, we present a user interface architecture that can form a sound basis for combining several of these concepts into complex systems. In this paper we explain the fundamentals of DWARF's user interface framework (DWARF standing for distributed wearable augmented reality framework) and an implementation of this architecture. Finally, we present several examples that show how the framework can form the basis of prototypical applications.

16.
One of the driving applications of ubiquitous computing is universal appliance interaction: the ability to use arbitrary mobile devices to interact with arbitrary appliances, such as TVs, printers, and lights. Because of limited screen real estate and the plethora of devices and commands available to the user, a central problem in achieving this vision is predicting which appliances and devices the user wishes to use next, in order to make interfaces for those devices available. We believe that universal appliance interaction is best supported through the deployment of appliance user interfaces (UIs) that are personalized to a user's habits and information needs. In this paper, we suggest that, in a truly ubiquitous computing environment, the user will not necessarily think of devices as separate entities; therefore, rather than focus on which device the user may want to use next, we present a method for automatically discovering the user's common tasks (e.g., watching a movie or surfing TV channels), predicting the task that the user wishes to engage in, and generating an appropriate interface that spans multiple devices. We have several results. We show that it is possible to discover and cluster collections of commands that represent tasks, and to use history to predict the next task reliably. In fact, we show that moving from devices to tasks is not only a useful way of representing our core problem, but that it is, in fact, an easier problem to solve. Finally, we show that tasks can vary from user to user.
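One way to use history to predict the next task is a first-order model over the discovered task labels: count which task tends to follow which, and suggest the most frequent follower. The sketch below illustrates only that prediction step, with invented task names; the clustering of raw appliance commands into tasks is not shown, and this is not the paper's exact algorithm.

```python
from collections import Counter, defaultdict

# Hypothetical history of tasks already inferred from clustered appliance commands.
history = ["watch movie", "surf channels", "watch movie",
           "surf channels", "watch movie", "lights off"]

# First-order model: count which task tends to follow which.
transitions = defaultdict(Counter)
for current, nxt in zip(history, history[1:]):
    transitions[current][nxt] += 1

def predict_next(current_task):
    """Return the historically most frequent follow-up task, if any."""
    followers = transitions.get(current_task)
    return followers.most_common(1)[0][0] if followers else None

print(predict_next("watch movie"))   # -> 'surf channels'
```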

17.
This paper presents a novel investigation of the effectiveness of haptic feedback for designing a class of interconnected multi-body systems such as passive mechanisms. The traditional application of haptic feedback in the design process has been in applications such as parts assembly or mold design. The mechanism discussed in this paper is designed for applications where the user needs to manipulate the mechanism in order to interact with an environment. The objective of the design is to choose the link ratios so that the mechanism allows the user better movement control and thus better force amplification when there is a sudden change in the contact reaction force from the application environment. A haptic device is used as a design interface between the designer of such mechanisms and the virtual mechanism model. For this preliminary investigation, we used a four-bar mechanism. In our case study, we choose, as an example, the net distance travelled by a tool when penetrating a model of a deformable surface as the design objective to minimize. The variation of this travelled distance can then be studied by adjusting some of the key design parameters of the mechanism. To evaluate our proposed haptic-aided design environment, an informal preliminary user study was conducted, where each subject explored a sampled design space of the mechanism. The results of the user study suggest that the use of a haptic device in the design of this class of mechanism can expedite the design process.

18.
Each middleware approach has one or more interaction models associated with it that determine how applications built on top of the middleware interact with each other. Message-oriented middleware (MOM) applications interact rather simply, for example, by posting messages to and retrieving messages from queues. Object-oriented middleware applications such as those based on Corba or Enterprise Java Beans (EJB) interact by invoking methods on distributed objects. Because interaction models significantly influence the types of abstractions a middleware system makes available to applications, they figure prominently in determining the breadth and variety of application integration that the middleware supports. As Web services evolve, they too will acquire standard interaction models; otherwise, their use will be limited to small-scale proprietary systems, rather than providing the standards-based "middleware for middleware" for uniting disparate islands of integration. At this point, however, the industry and standards bodies have yet to reach consensus on Web services interaction models. I consider some of the problems associated with a popular current approach to Web services interaction models.
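The contrast between the two interaction models can be sketched in a few lines: a message-oriented style in which applications interact indirectly through a queue, versus an object-oriented style in which the caller invokes a method on a (possibly remote) object and waits for the result. The Python stand-ins below are illustrative only; real MOM and Corba/EJB systems add naming, marshalling and delivery guarantees.

```python
import queue

# Message-oriented style: applications interact indirectly through a queue.
orders = queue.Queue()
orders.put({"type": "order", "item": "widget", "qty": 3})   # producer posts a message
message = orders.get()                                       # consumer retrieves it later
print("MOM consumer received:", message)

# Object-oriented middleware style: the caller invokes a method on a
# (possibly remote) object and waits for the result.
class OrderService:
    def place_order(self, item, qty):
        return "order accepted: %d x %s" % (qty, item)

service = OrderService()          # stands in for a Corba/EJB remote stub
print("RPC caller received:", service.place_order("widget", 3))
```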

19.
Collaborative video searching on a tabletop
Almost all system and application design for multimedia systems is based around a single user working in isolation to perform some task, yet much of the work for which we use computers is done collaboratively with colleagues. Groupware systems do support user collaboration, but typically this is supported through software and users still physically work independently. Tabletop systems, such as the DiamondTouch from MERL, are interface devices which support direct user collaboration on a tabletop. When a tabletop is used as the interface for a multimedia system, such as a video search system, this kind of direct collaboration raises many questions for system design. In this paper we present a tabletop system for supporting a pair of users in a video search task, and we evaluate the system not only in terms of search performance but also in terms of user-user interaction and how different user personalities within each pair of searchers impact search performance and user interaction. Incorporating the user into the system evaluation as we have done here reveals several interesting results and has important ramifications for the design of a multimedia search system.

20.
User Modeling for Adaptive News Access
We present a framework for adaptive news access, based on machine learning techniques specifically designed for this task. First, we focus on the system's general functionality and system architecture. We then describe the interface and design of two deployed news agents that are part of the described architecture. While the first agent provides personalized news through a web-based interface, the second system is geared towards wireless information devices such as PDAs (personal digital assistants) and cell phones. Based on implicit and explicit user feedback, our agents use a machine learning algorithm to induce individual user models. Motivated by general shortcomings of other user modeling systems for Information Retrieval applications, as well as the specific requirements of news classification, we propose the induction of hybrid user models that consist of separate models for short-term and long-term interests. Furthermore, we illustrate how the described algorithm can be used to address an important issue that has thus far received little attention in the Information Retrieval community: a user's information need changes as a direct result of interaction with information. We empirically evaluate the system's performance based on data collected from regular system users. The goal of the evaluation is not only to understand the performance contributions of the algorithm's individual components, but also to assess the overall utility of the proposed user modeling techniques from a user perspective. Our results provide empirical evidence for the utility of the hybrid user model, and suggest that effective personalization can be achieved without requiring any extra effort from the user.
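A hybrid model of the kind described can be sketched as a short-term component that matches a new story against the most recently seen stories, backed by a long-term classifier trained on the whole rating history; the similarity threshold, window size and toy data below are assumptions for illustration, not the deployed algorithm.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.metrics.pairwise import cosine_similarity

# Toy rating history: 1 = the user found the story interesting, 0 = not.
stories = ["nasa launches new mars rover",
           "local team wins championship",
           "mars rover sends first images",
           "stock markets fall sharply"]
labels = [1, 0, 1, 0]

vec = TfidfVectorizer()
X = vec.fit_transform(stories)

# Long-term model: a classifier trained over the whole rating history.
long_term = MultinomialNB().fit(X, labels)

def score(new_story, recent_k=2, threshold=0.15):
    """Use the short-term model (similarity to recently seen stories) when it
    is confident; otherwise fall back to the long-term classifier."""
    x = vec.transform([new_story])
    sims = cosine_similarity(x, X[-recent_k:])[0]     # short-term memory
    if sims.max() >= threshold:                       # close to a recent story
        return float(labels[-recent_k:][int(sims.argmax())])
    return float(long_term.predict_proba(x)[0, 1])

print(score("rover photographs martian surface"))
```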
