Similar Literature
20 similar documents found.
1.
With the rapid spread of smartphones, users now have access to many of the application types found on desktop computer systems. Smartphone applications using augmented reality (AR) technology make use of users' location information; because such applications differ from conventional ones, they require new evaluation methods and attention to usability and user convenience. The purpose of the current study is to develop usability principles for the development and evaluation of smartphone applications that use AR technology. We derive usability principles for smartphone AR applications by analyzing existing research on heuristic evaluation methods, design principles for AR systems, guidelines for handheld mobile device interfaces, and usability principles for tangible user interfaces. We conducted a heuristic evaluation of three popular smartphone AR applications to identify usability problems and suggested new design guidelines to resolve them. We then developed an improved AR application prototype for an Android-based smartphone and conducted usability testing on it to validate the effects of the usability principles.
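The abstract mentions a heuristic evaluation but includes no materials; the Python sketch below shows one conventional way to record such findings, using Nielsen-style severity ratings (0-4). The heuristics and findings listed are invented examples, not the paper's.

```python
# Hypothetical record-keeping for a heuristic evaluation of AR apps;
# the heuristic names and findings below are illustrative only.
from dataclasses import dataclass

@dataclass
class Finding:
    heuristic: str     # which usability principle was violated
    description: str
    severity: int      # 0 (not a problem) .. 4 (usability catastrophe)

findings = [
    Finding("visibility of system status",
            "AR overlay gives no cue while the GPS fix is pending", 3),
    Finding("user control and freedom",
            "no way to dismiss an anchored annotation", 2),
]

# Sort the report by severity so the worst problems are addressed first.
for f in sorted(findings, key=lambda f: -f.severity):
    print(f.severity, f.heuristic, "-", f.description)
```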

2.
For Part I, see ibid., vol. 9, p. 3 (2007). In this paper, the task and user interface modules of a multimodal dialogue system development platform are presented. The main goal of this work is to provide a simple, application-independent solution to the problem of multimodal dialogue design for information-seeking applications. The proposed system architecture clearly separates the task and interface components of the system. A task manager is designed and implemented that consists of two main submodules: the electronic form module, which handles the list of attributes that have to be instantiated by the user, and the agenda module, which contains the sequence of user and system tasks. Both the electronic forms and the agenda can be dynamically updated by the user. Next, a spoken dialogue module is designed that implements the speech interface for the task manager. The dialogue manager can handle complex error-correction and clarification user input, building on the semantic and pragmatic modules presented in Part I of this paper. The spoken dialogue system is evaluated on a travel reservation task from the DARPA Communicator research program and shown to yield over 90% task completion and good performance on both objective and subjective evaluation metrics. Finally, a multimodal dialogue system that combines graphical and speech interfaces is designed, implemented, and evaluated. Only minor modifications to the unimodal semantic and pragmatic modules were required to build the multimodal system. It is shown that the multimodal system significantly outperforms the unimodal speech-only system, both in terms of efficiency (task success and time to completion) and user satisfaction, for a travel reservation task.
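No code accompanies the abstract; the sketch below illustrates, under assumptions, the described split between an electronic form (attributes to instantiate) and an agenda (sequence of tasks). All class and method names are hypothetical, not the platform's actual API.

```python
# Hypothetical sketch of the electronic-form / agenda split described above.
from collections import deque

class ElectronicForm:
    """Tracks the attributes the user still has to instantiate."""
    def __init__(self, attributes):
        self.slots = {name: None for name in attributes}

    def fill(self, name, value):
        self.slots[name] = value

    def missing(self):
        return [n for n, v in self.slots.items() if v is None]

class TaskManager:
    """Pairs the form with an agenda of user/system tasks; both can be
    updated dynamically, as in the travel-reservation example."""
    def __init__(self, attributes):
        self.form = ElectronicForm(attributes)
        self.agenda = deque(f"ask_{a}" for a in attributes)

    def next_system_task(self):
        # Skip agenda items whose slot was already filled by the user.
        while self.agenda:
            task = self.agenda.popleft()
            if self.form.slots.get(task.removeprefix("ask_")) is None:
                return task
        return "confirm_booking"

manager = TaskManager(["origin", "destination", "date"])
manager.form.fill("destination", "Boston")   # user over-answers early
print(manager.next_system_task())            # -> ask_origin
```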

3.
Recent user interface concepts, such as multimedia, multimodal, wearable, ubiquitous, tangible, or augmented-reality-based (AR) interfaces, each cover different approaches that are all needed to support complex human–computer interaction. Increasingly, an overarching approach will be required for building what we call ubiquitous augmented reality (UAR) user interfaces, which include all of the concepts just mentioned. To this end, we present a user interface architecture that can form a sound basis for combining several of these concepts into complex systems. In this paper, we explain the fundamentals of the DWARF user interface framework (DWARF stands for Distributed Wearable Augmented Reality Framework) and an implementation of this architecture. Finally, we present several examples that show how the framework can form the basis of prototypical applications.

4.
5.
In this paper, we describe the design and the management of an agent-based system that supports distributed brainstorming activities. The support system is a highly coordinated IoT application composed of many locally installed interface devices, multimedia communication functions, and cloud functions that process application logic and store meeting data. The system is designed to support a variety of brainstorming sessions, so its functionalities must be modifiable and enable the system to be adapted to different environments and user requirements without any loss of performance. System accessibility should also be ensured from any location for any user. These constraints require a flexible and usable support system. We further discuss the aspects of flexibility and usability that are important in a support system for distributed brainstorming, from which we propose a conceptual schema for flexible and usable support systems. To realize this schema, we present a resource-oriented architecture that can modify the brainstorming support system's structure and functions. Flexibility is achieved thanks to an agent-based system that manages resources and operates on them according to users' requests. We also describe the system architecture, which is organized around a set of channels dedicated to the different services offered to users. We present in detail a video channel that ensures user awareness during synchronized activities. We then conduct several experiments verifying the usability of important channels in the architecture and present the results of these experiments. Finally, we discuss experimental scenarios that show how the system owes its adaptability to management based on an agent organization that supports distributed brainstorming and other activities.

6.
As third-generation (3G) networks emerge, they provide not only higher data transmission rates but also the ability to transmit both voice and low-latency data within the same session. This paper describes the architecture and implementation of a multimodal application (voice and text) that uses natural language understanding combined with a WAP browser to access email messages on a cell phone. We present results from a laboratory trial that evaluated how the system was used. The trial also compared the multimodal system with a text-only system representative of current products in the market. We discuss the observed modality issues and highlight implementation problems and usability concerns encountered in the trial. Findings indicate that participants used speech the majority of the time for both input and navigation, even though most had little or no prior experience with speech systems (but did have prior experience with text-only access to applications on their phones). To our knowledge, this represents the first implementation and evaluation of its kind using this combination of technologies on an unmodified cell phone. Design implications resulting from the study findings and the usability issues encountered are presented to inform the design of future conversational multimodal mobile applications.

7.
Phishing is considered one of the most serious threats to the Internet and e-commerce. Phishing attacks abuse trust with the help of deceptive e-mails, fraudulent web sites, and malware. To prevent phishing attacks, some organizations have implemented Internet browser toolbars for identifying deceptive activities. However, their levels of usability and their user interfaces vary, and some of the toolbars have obvious usability problems that can ultimately affect their performance. For the sake of future improvement, usability evaluation is indispensable. We discuss the usability of five typical anti-phishing toolbars: the built-in phishing prevention in Internet Explorer 7.0, the Google toolbar, the Netcraft Anti-phishing toolbar, SpoofGuard, and an Internet Explorer plug-in we have developed ourselves, Anti-phishing IEPlug. Our hypothesis was that the usability of anti-phishing toolbars, and consequently also their security, could be improved. Indeed, the heuristic usability evaluation uncovered a number of usability issues. In this article, we describe the anti-phishing toolbars, discuss our approach to evaluating anti-phishing toolbar usability, and present our findings. Finally, we propose advice for improving the usability of anti-phishing toolbars, covering the three key components of client-side anti-phishing applications: the main user interface, critical warnings, and the help system. For example, we found that in the main user interface it is important to keep the user informed and to organize settings according to sound usability design. In addition, all the critical warnings an anti-phishing toolbar shows should be well designed. Furthermore, we found that the help system should be built to help users learn about phishing prevention and how to identify fraud attempts by themselves. One result of our research is also a classification of anti-phishing toolbar applications. Linfeng Li is a student at the University of Tampere, Finland. Marko Helenius is Assistant Professor at the Department of Computer Sciences, University of Tampere, Finland.

8.
This paper presents a concept for the adaptive development of user interfaces in multimodal web-based systems. Today, it is crucial for general-access web-based systems that the user interface is properly designed and adjusted to user needs and capabilities. It is believed that adaptive interfaces could offer a solution to this problem. Here, we introduce the notion of the user profile for classification, the interface profile for describing the system interface, and a compound usability measure for evaluating the interface. Consensus-based methods are applied to construct interface profiles appropriate to classes of users.
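As a rough illustration of the two ideas named above, a compound usability measure and consensus-based profile construction, here is a hedged Python sketch; the paper's actual profile attributes, weights, and consensus operator are not given here, so all of them are assumptions.

```python
# Illustrative only: attribute names, weights, and the median-based
# consensus operator are assumptions, not the paper's definitions.
from statistics import median

def compound_usability(scores, weights):
    """Weighted aggregate of per-criterion usability scores in [0, 1]."""
    total = sum(weights.values())
    return sum(scores[c] * w for c, w in weights.items()) / total

def consensus_profile(profiles):
    """One simple consensus operator: the component-wise median of the
    interface profiles proposed for a class of users."""
    keys = profiles[0].keys()
    return {k: median(p[k] for p in profiles) for k in keys}

profiles = [{"font_size": 12, "contrast": 0.6},
            {"font_size": 16, "contrast": 0.8},
            {"font_size": 14, "contrast": 0.7}]
print(consensus_profile(profiles))  # {'font_size': 14, 'contrast': 0.7}
print(compound_usability({"efficiency": 0.8, "errors": 0.9},
                         {"efficiency": 2, "errors": 1}))  # ~0.83
```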

9.
The study provides an empirical analysis of long-term user behavioral changes and varying user strategies during cross-lingual interaction using the multimodal speech-to-speech (S2S) translation system of USC/SAIL. The goal is to inform user-adaptive designs of such systems. A 4-week medical-scenario-based study provides the basis for our analysis. The data analyzed include user interviews, post-session surveys, and extensive system logs that were post-processed and annotated. The annotations measured meaning transfer rates using human evaluations and a scale defined here called the concept matching score.

First, qualitative data analysis investigates user strategies in dealing with errors, such as repeat, rephrase, change topic, and start over, as well as the participants' self-reported longitudinal adaptation to errors. Post-session surveys explore participant experience with the system and point to a trend of user-perceived increased performance over time.

The log data analysis provides further insightful results. Users chose to allow some degradation (84% of original concepts) of their intended meaning to proceed through the system, even after they observed potential errors in the visual output from the speech recognizer. The rejected utterances, on average, retained only 25% of the original concepts. After complete channel transfer through the S2S system, the user-filtered outcome is that 91% of the successful turns convey at least half the intended concepts, while 90% of the user-rejected turns would have conveyed less than half the intended meaning.

The multimodal interface yields a 24% relative improvement in the confirmation mode and a 31% relative improvement in the choice mode compared to the speech-only modality. Analysis also showed that users of the multimodal interface change their strategies over time by accepting more system-produced choices. This user behavior can expedite communication, seeking an operating balance between user strategies and system performance factors.

Lastly, user utterance length is analyzed. Longer utterances in general imply more information delivered per utterance, but potentially at the cost of increased processing degradation. The analysis demonstrates that users reduce their utterance length after unsuccessful turns and increase it after successful turns, and that a learning effect increases this behavior over the duration of the study.
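The concept matching score is human-rated in the study; the toy function below only captures the underlying idea of measuring what fraction of the intended concepts survive the transfer (the 84% and 25% figures above are fractions of this kind). The concept names are invented.

```python
# Hedged sketch: the study uses human annotation; this toy version just
# measures set overlap between intended and conveyed concepts.
def concept_matching_score(intended, conveyed):
    intended, conveyed = set(intended), set(conveyed)
    if not intended:
        return 1.0
    return len(intended & conveyed) / len(intended)

score = concept_matching_score(
    {"pain", "chest", "two_days"},   # what the speaker meant
    {"pain", "chest"})               # what survived translation
print(score)  # ~0.67: below 1.0, so some meaning was lost in transfer
```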

10.
NetShop: a multimodal online shopping interface based on a Web browser (cited: 2 in total; self-citations: 1; citations by others: 1)
This paper introduces the system architecture and design features of NetShop, a prototype multimodal online shopping interface built on a general-purpose Web browser. The system extends a standard Web browser with multimodal capabilities and is designed as a multimodal interaction system for the online shopping domain. Through context-based queries, an integration strategy centered on a primary channel, speech feedback, and compensatory input, the system provides a natural interaction environment for online shopping. Architecturally, it adopts soft plug-in technology, which makes the design more flexible and provides an open interface for third-party development.

11.
A shared interactive display (e.g., a tabletop) provides a large space for collaborative interactions, but as a public display it lacks a private space for accessing sensitive information. A mobile device, on the other hand, offers a private display and a variety of modalities for personal applications, but is limited by its small screen. We have developed a framework that supports fluid and seamless interaction between a tabletop and multiple mobile devices. The framework continuously tracks each user's actions (e.g., hand movements or gestures) on the tabletop and automatically generates a unique personal interface on the associated mobile device. This type of inter-device interaction integrates a collaborative workspace (the tabletop) and a private area (the mobile device) with multimodal feedback. To support this interaction style, an event-driven architecture is used to implement the framework on the Microsoft PixelSense tabletop. The framework hides the details of user tracking and inter-device communication, so interface designers can focus on developing domain-specific interactions by mapping a user's actions on the tabletop to a personal interface on his or her mobile device. Results from two studies support the usability of the proposed interaction.
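To make the event-driven mapping concrete, the following hypothetical sketch routes a tabletop event to the personal interface of the paired mobile device. It does not model the PixelSense API; every name here is a stand-in.

```python
# Minimal event-driven sketch of the tabletop-to-mobile mapping described
# above. Event names and device pairing are invented for illustration.
class InteractionFramework:
    def __init__(self):
        self.handlers = {}          # event type -> list of callbacks
        self.device_of = {}         # user id -> paired mobile device id

    def on(self, event_type, callback):
        self.handlers.setdefault(event_type, []).append(callback)

    def emit(self, event_type, user, payload):
        for cb in self.handlers.get(event_type, []):
            cb(user, self.device_of.get(user), payload)

fw = InteractionFramework()
fw.device_of["alice"] = "phone-17"

def show_private_view(user, device, payload):
    # A designer maps the tabletop gesture to a personal interface here.
    print(f"push {payload['object']} controls to {device} for {user}")

fw.on("object_touched", show_private_view)
fw.emit("object_touched", "alice", {"object": "document-42"})
```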

12.
In this age of (near-)adequate computing power, the power and usability of the user interface is as key to an application's success as its functionality. Most of the code in modern desktop productivity applications resides in the user interface. But despite its centrality, the user interface field is currently in a rut: the WIMP (Windows, Icons, Menus, Point-and-Click GUI based on keyboard and mouse) has evolved little since it was pioneered by Xerox PARC in the early '70s. Computer and display form factors will change dramatically in the near future and new kinds of interaction devices will soon become available. Desktop environments will be enriched not only with PDAs such as the Newton and Palm Pilot, but also with wearable computers and large-screen displays produced by new projection technology, including office-based immersive virtual reality environments. On the input side, we will finally have speech-recognition and force-feedback devices. Thus we can look forward to user interfaces that are dramatically more powerful and better matched to human sensory capabilities than those dependent solely on keyboard and mouse. 3D interaction widgets controlled by mice or other interaction devices with three or more degrees of freedom are a natural evolution from their two-dimensional WIMP counterparts and can decrease the cognitive distance between widget and task for many tasks that are intrinsically 3D, such as scientific visualization and MCAD. More radical post-WIMP UIs are needed for immersive virtual reality where keyboard and mouse are absent. Immersive VR provides good driving applications for developing post-WIMP UIs based on multimodal interaction that involve more of our senses by combining the use of gesture, speech, and haptics.

13.
While information visualization technologies have transformed our life and work, designing information visualization systems still faces challenges. Non-expert users, or end-users, need toolkits that allow rapid design and prototyping and that support unified data structures suitable for different data types (e.g., tree, network, temporal, and multi-dimensional data) as well as various visualization and interaction tasks. To address these issues, we designed DaisyViz, a model-based user interface toolkit that enables end-users to rapidly develop domain-specific information visualization applications without traditional programming. DaisyViz is based on a user interface model for information visualization (UIMI), which includes three declarative models: a data model, a visualization model, and a control model. In the development process, a user first constructs a UIMI with interactive visual tools; the resulting UIMI is then parsed to generate a prototype system automatically. In this paper, we discuss the concept of the UIMI, describe the architecture of DaisyViz, and show how to use DaisyViz to build an information visualization system. We also present a usability study of DaisyViz; our findings indicate DaisyViz is an effective toolkit for helping end-users build interactive information visualization systems.
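The abstract describes the UIMI as three declarative models that are parsed into a running prototype; the sketch below shows one plausible encoding of that idea. All field names are invented, not DaisyViz's actual schema.

```python
# Hypothetical encoding of the three declarative UIMI models and a
# trivial "parser"; names do not come from DaisyViz itself.
uimi = {
    "data":          {"source": "sales.csv", "type": "multi-dimensional"},
    "visualization": {"technique": "scatterplot",
                      "x": "revenue", "y": "growth"},
    "control":       {"interactions": ["zoom", "filter", "details-on-demand"]},
}

def generate_prototype(model):
    """Parse the declarative models and emit a (fake) prototype spec."""
    d, v, c = model["data"], model["visualization"], model["control"]
    return (f"{v['technique']} of {d['source']} "
            f"({v['x']} vs {v['y']}), supports: {', '.join(c['interactions'])}")

print(generate_prototype(uimi))
```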

14.
Augmented reality (AR) is an Industry 4.0 technology. For more than a decade, advancements in AR technology and its applications have been expected to revolutionise the manufacturing industry and deliver quality and productivity gains. However, due to factors such as equipment costs, skills shortages, and the technological limitations of AR devices, operational deployment beyond prototypes has been constrained. Real-world usability studies can explore barriers to implementation and improve system design. This paper details a mixed-method usability case study of an AR head-mounted display (HMD) used to perform a short, simple visual inspection task. Twenty-two participants from South Australian manufacturing businesses inspected a pump and pipe skid while working at height. Overall, workload demands for the task were considered acceptable and just below the “low” workload threshold (NASA Task Load Index, mean = 29.3), and the system usability was rated “average” (System Usability Scale, mean = 68.5). The results suggest the task did not place too high a burden on users and was an appropriate initial exposure to AR HMDs, but further refinement of the interface would be desirable before implementation to minimise frustration and promote learning. Users were enthusiastic and open-minded about the AR HMD, although the results indicate that even with recent advancements in AR HMD technology, interactions between the task, technology, and environment continue to cause human and technical challenges, some of which are relatively straightforward to address while others depend on larger-scale efforts.
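The SUS mean of 68.5 quoted above comes from the standard ten-item System Usability Scale; for reference, this is how SUS responses are conventionally scored (the published scoring rule, not code from the paper):

```python
# Standard System Usability Scale (SUS) scoring: ten items answered 1-5,
# odd items positively worded, even items negatively worded.
def sus_score(responses):
    """responses: ten answers, each 1-5, in questionnaire order."""
    assert len(responses) == 10 and all(1 <= r <= 5 for r in responses)
    total = 0
    for i, r in enumerate(responses):
        total += (r - 1) if i % 2 == 0 else (5 - r)  # i even = odd item
    return total * 2.5  # scale to 0-100; ~68 is commonly read as "average"

print(sus_score([4, 2, 4, 2, 4, 2, 4, 2, 4, 2]))  # 75.0
```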

15.
Immersive authoring refers to the style of programming or developing content from within the target executable environment. Immersive authoring is important for fields such as augmented reality (AR), in which interaction usability and user perception of the target content must be checked first-hand, in situ. In addition, the interaction efficiency and usability of the authoring tool itself are equally important for ease of authoring. In this paper, we propose design principles and describe an implementation of an immersive authoring system for AR. More importantly, we present a formal user study demonstrating its benefits and weaknesses. In particular, our results demonstrate that, compared to the traditional 2D desktop development method, immersive authoring was significantly more efficient for specifying spatial arrangements and behavior tasks, a major component of AR content authoring. However, it was not as successful for abstract tasks such as logical programming. Based on this result, we suggest that a comprehensive AR authoring tool should include immersive authoring functionality to help non-technical media artists, in particular, create effective content based on the characteristics of the underlying media and interaction style.

16.
For decades, brain–computer interfaces (BCIs) have been used to restore the communication and mobility of disabled people through applications such as spellers, web browsers, and wheelchair controls. In parallel with advances in computational intelligence and the arrival of consumer BCI products, BCIs have recently started to be considered as alternative modalities in human–computer interaction (HCI). One popular topic in HCI is multimodal interaction (MMI), which deals with combining multiple modalities to provide powerful, flexible, adaptable, and natural interfaces. This article discusses the place of BCI as a modality within MMI research. State-of-the-art, real-time multimodal BCI applications are surveyed to demonstrate how BCI can be helpful as a modality in MMI. It is shown that multimodal use of BCIs can improve error handling, task performance, and user experience, and that it can broaden the user spectrum. Techniques for employing BCI in MMI are described, and the experimental and technical challenges are presented, along with some guidelines for overcoming them. Issues in input fusion, output fission, integration architectures, and data collection are covered.
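Input fusion, one of the issues listed above, is often handled by late fusion of per-modality classifier outputs; the toy sketch below shows a weighted-average variant. The weights, labels, and probabilities are invented for illustration and are not from the article.

```python
# Toy late-fusion sketch for combining a BCI classifier with another
# modality. All numbers and labels are assumptions.
def fuse(bci_probs, other_probs, w_bci=0.4):
    """Weighted average of per-class probabilities from two modalities."""
    return {c: w_bci * bci_probs[c] + (1 - w_bci) * other_probs[c]
            for c in bci_probs}

scores = fuse({"select": 0.7, "cancel": 0.3},
              {"select": 0.4, "cancel": 0.6})
print(max(scores, key=scores.get))  # 'select' wins (0.52 vs 0.48)
```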

17.
Several studies have been carried out on augmented reality (AR)-based environments that deal with user interfaces for manipulating and interacting with virtual objects, aimed at improving the feeling of immersion and the naturalness of interaction. Most of these studies have utilized AR paddles or AR cubes for interaction. However, such interactions overly constrain users in their ability to directly manipulate AR objects and provide only a limited sense of naturalness in the user interface. This paper presents a novel approach to natural and intuitive interaction through a directly hand-touchable interface in various AR-based user experiences. It combines markerless augmented reality with a depth camera to effectively detect multiple hand touches in an AR space. Furthermore, to simplify hand-touch recognition, the point cloud generated by Kinect is analyzed and filtered. The proposed approach can easily trigger AR interactions, allows users to experience more intuitive and natural sensations, and provides efficient control in diverse AR environments. It can also resolve the occlusion problem of the hand and arm region inherent in conventional AR approaches through analysis of the extracted point cloud. We demonstrate the effectiveness and advantages of the proposed approach through several implementations, such as an interactive AR car design application and a touchable AR pamphlet. We also present a usability study comparing the proposed approach with other well-known AR interaction techniques.
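As a rough sketch of the depth-based touch detection described above: keep only point-cloud samples hovering just above the (assumed planar) surface. NumPy stands in for the Kinect pipeline, and the thresholds are assumptions, not the paper's values.

```python
# Hypothetical depth filtering in the spirit of the approach above:
# retain points 3-20 mm above a surface assumed to lie at z = 0.
import numpy as np

def touch_candidates(points, z_min=0.003, z_max=0.02):
    """points: (N, 3) array in metres; returns likely fingertip samples."""
    z = points[:, 2]
    return points[(z > z_min) & (z < z_max)]

cloud = np.array([[0.10, 0.20, 0.010],    # fingertip near the surface
                  [0.11, 0.21, 0.015],
                  [0.30, 0.40, 0.150]])   # forearm, far above: filtered out
print(touch_candidates(cloud))
```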

18.
This paper provides a review of research into using Augmented Reality (AR) and Mixed Reality (MR) for remote collaboration on physical tasks. AR/MR-based remote collaboration on physical tasks has recently become more prominent in academic research and engineering applications. It has great potential in many fields, such as real-time remote medical consultation, education, training, maintenance, remote assistance in engineering, and other remote collaborative tasks. However, to the best of our knowledge, there has not been any comprehensive review of research in AR/MR remote collaboration on physical tasks. Therefore, this paper presents a comprehensive survey of research between 2000 and 2018 in this domain. We collected 215 papers, more than 80% of which were published between 2010 and 2018, and all relevant works are discussed at length. We then elaborate on the review in terms of typical architectures, applications (e.g., industry, telemedicine, architecture, tele-education, and others), and empathic computing. Next, we review the papers in depth from seven aspects: (1) collection and classification of research, (2) use of 3D scene reconstruction environments and live panoramas, (3) periodicals and conducting research, (4) local and remote user interfaces, (5) features of commonly used user interfaces, (6) architecture and sharing of non-verbal cues, (7) applications and toolkits. We find that most papers (160 articles, 74.4%) were published in conferences; using co-located collaboration to emulate remote collaboration is adopted by more than half (126, 58.6%) of the reviewed papers; the shared non-verbal cues can be classified into five main types (Virtual Replicas or Physical Proxy (VRP), AR Annotations or a Cursor Pointer (ARACP), avatar, gesture, and gaze); and the local/remote interface is mainly divided into four categories (Head-Mounted Displays (HMD), Spatial Augmented Reality (SAR), Windows-Icon-Menu-Pointer (WIMP), and Hand-Held Displays (HHD)). From this we draw ten conclusions, and we then report on issues for future work. The paper also provides an overall academic roadmap and useful insight into the state of the art of AR/MR remote collaboration on physical tasks. This work will be useful for current and future researchers who are interested in collaborative AR/MR systems.

19.
This study presents a user interface that was intentionally designed to support multimodal interaction by compensating for the weaknesses of speech relative to pen input, and vice versa. The test application was email on a web pad with pen and speech input. For pen input, information was represented as easily accessible visual objects, and graphical metaphors were used to enable faster and easier manipulation of data. Speech input was facilitated by displaying the system's speech vocabulary to the user: all commands and accessible fields with text labels could be spoken by name. Commands and objects that the user could access via speech input were shown dynamically in a window. Multimodal interaction was further enhanced by a flexible object-action order, such that the user could utter or select a command with the pen followed by the object to be acted upon, or the other way round (e.g., New Message or Message New). The flexible action-object design combined with voice and pen input led to eight possible action-object-modality combinations. The complexity of the multimodal interface was further reduced by making generic commands such as New applicable across corresponding objects. Generic commands simplified the menu structures by reducing the number of instances in which actions appeared; in this manner, more content information could be made visible and consistently accessible via pen and speech input. Results of a controlled experiment indicated that, across the eight input conditions, task completion times were shortest when speech alone was used to refer to an object followed by the action to be performed. Speech-only input with action-object order was also relatively fast. For pen-only input, the shortest task completion times occurred when an object was selected first, followed by the action to be performed. In multimodal trials in which both pen and speech were used, no significant effect was found for object-action order, suggesting the benefits of providing users with a flexible action-object interaction style in multimodal or speech-only systems.
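A minimal sketch of the flexible action-object order described above: a parser that resolves New Message and Message New to the same command. The vocabulary is invented, not the study's actual grammar.

```python
# Order-insensitive command resolution, assuming disjoint vocabularies
# of actions and objects (an assumption, not the paper's design detail).
ACTIONS = {"new", "delete", "reply"}
OBJECTS = {"message", "folder", "contact"}

def parse_command(tokens):
    """Return (action, object) regardless of the order spoken/selected."""
    words = {t.lower() for t in tokens}
    action = words & ACTIONS
    obj = words & OBJECTS
    if len(action) == 1 and len(obj) == 1:
        return action.pop(), obj.pop()
    raise ValueError(f"could not resolve command: {tokens}")

print(parse_command(["New", "Message"]))   # ('new', 'message')
print(parse_command(["Message", "New"]))   # same result, reversed order
```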

20.
We describe a design approach, Tangible Augmented Reality, for developing face-to-face collaborative Augmented Reality (AR) interfaces. Tangible Augmented Reality combines Augmented Reality techniques with Tangible User Interface elements to create interfaces in which users can interact with spatial data as easily as real objects. Tangible AR interfaces remove the separation between the real and virtual worlds, and so enhance natural face-to-face communication. We present several examples of Tangible AR interfaces and results from a user study that compares communication in a collaborative AR interface to more traditional approaches. We find that in a collaborative AR interface people use behaviours that are more similar to unmediated face-to-face collaboration than in a projection screen interface.

