Similar Documents
20 similar documents found (search time: 234 ms).
1.
Objective: Create a visual mobile end-user development framework, named Puzzle, which allows end users without an IT background to create, modify and execute applications, and which provides support for interaction with smart things, phone functions and web services. Methods: Design of an intuitive visual metaphor and associated interaction techniques for supporting end-user development on mobile devices, with iterative empirical validation. Results: Our results show that the jigsaw is an intuitive metaphor for development in a mobile environment, and that our interaction techniques required only limited cognitive effort to use and learn the framework. Integration of different modalities and the use of smart things were relevant for users. Conclusion: Puzzle has addressed the main objective. The framework further contributes to research on mobile end-user development, creating an incentive for users to go beyond consuming content and applications and to start creating their own applications. Practice: Use of a mobile end-user development environment has the potential to shift software distribution from the conventional few-to-many model to a many-to-many model. Users will be able to create applications that fit their requirements and share their achievements with peers. Implications: This study indicates that the Puzzle visual environment has the potential to enable users to easily create applications and to stimulate exploration of innovative scenarios through smartphones.

2.
The TV-Anytime standard describes the structure of categories of digital TV program metadata, as well as user profile metadata for TV programs. We describe a natural language (NL) model for users to interact with the TV-Anytime metadata and preview TV programs from their mobile devices. The language fully utilises the TV-Anytime metadata specifications (upper ontologies), as well as domain-specific ontologies. The interaction model does not use clarification dialogues; instead, it uses the user profiles, the TV-Anytime metadata and the ontologies to rank the possible responses in case of ambiguity. We describe implementations of the model that run on a PDA and on a mobile phone and that manage the metadata on a remote TV-Anytime-compatible TV set. We present user evaluations of the approach. Finally, we propose a generalised implementation framework that can be used to easily provide NL interfaces on mobile devices for different applications and ontologies.

3.
The increasing complexity of applications on handheld devices requires the development of rich new interaction methods specifically designed for resource-limited mobile use contexts. One appealingly convenient approach to this problem is to use device motions as input, a paradigm in which the currently dominant interaction metaphors are gesture recognition and visually mediated scrolling. However, neither is ideal. The former suffers from fundamental problems in the learning and communication of gestural patterns, while the latter requires continual visual monitoring of the mobile device, a task that is undesirable in many mobile contexts and also inherently in conflict with the act of moving a device to control it. This paper proposes an alternate approach: a gestural menu technique inspired by marking menus and designed specifically for the characteristics of motion input. It uses rotations between targets occupying large portions of angular space and emphasizes kinesthetic, eyes-free interaction. Three evaluations are presented, two featuring an abstract user interface (UI) and focusing on how user performance changes when the basic system parameters of number, size and depth of targets are manipulated. These studies show that a version of the menu system containing 19 commands yields optimal performance, compares well against data from the previous literature and can be used effectively eyes free (without graphical feedback). The final study uses a full graphical UI and untrained users to demonstrate that the system can be rapidly learnt. Together, these three studies rigorously validate the system design and suggest promising new directions for handheld motion-based UIs.
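As an illustrative sketch (not taken from the paper), the Python fragment below shows the basic quantisation idea behind such a rotation-based menu: a device rotation angle is mapped onto one of a fixed number of targets that evenly partition the angular space. The function name, the 19-target default and the modular normalisation are assumptions made for illustration; the real system also handles menu depth and kinesthetic feedback.

def select_menu_item(rotation_deg: float, num_items: int = 19) -> int:
    """Map a device rotation angle (in degrees) to one of num_items menu
    targets that evenly partition the angular space (simplified sketch)."""
    sector = 360.0 / num_items          # angular width of each target
    angle = rotation_deg % 360.0        # normalise into [0, 360)
    return int(angle // sector)         # index of the selected target

# Example: with 19 targets, a 95-degree rotation selects target 5
print(select_menu_item(95.0))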

4.
As more interactive surfaces enter public life, casual interactions from passersby are bound to increase. Most of these users can be expected to carry a mobile phone or PDA, which nowadays offers significant computing capabilities of its own. This opens new possibilities for interaction between these users’ private displays and large public ones. In this paper, we present a system that supports such casual interactions. We first explore a method to track mobile phones that are placed on a horizontal interactive surface by examining the shadows they cast on the surface. This approach detects the presence of a mobile device, as opposed to any other opaque object, through the signal strength emitted by the built-in Bluetooth transceiver, without requiring any modifications to the devices’ software or hardware. We then investigate interaction between a Sudoku game running in parallel on the public display and on mobile devices carried by passing users. Mobile users can join a running game by placing their devices on a designated area; the only requirement is that the device is in discoverable Bluetooth mode. After a specific device has been recognized, client software is sent to the device, which then enables the user to interact with the running game. Finally, we explore the results of a study we conducted to determine the effectiveness and intrusiveness of interactions between users on the tabletop and users with mobile devices.

5.
Coupling mobile devices and other remote interaction technology with the software systems surrounding the user enables building interactive environments under explicit user control. Realizing explicit interaction in ubiquitous or pervasive computing environments introduces a physical distribution of input devices and of technology embedded into the user's environment. To fulfill the requirements of emerging trends in mobile interaction, common approaches to system design need adaptation and extension. This paper presents an adaptation and extension of the Model-View-Controller approach for designing applications with remote, complementary, duplicated and detached user interface elements.
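As a rough illustration of this design direction (not the authors' implementation), the Python sketch below shows a single model driving both a locally embedded view and a detached view standing in for a mobile device. The class names and the push-based update mechanism are assumptions made for the example; a real system would transport updates over the network.

from typing import Callable, List

class Model:
    """Application state shared by local and remote user interface elements."""
    def __init__(self) -> None:
        self._observers: List[Callable[[int], None]] = []

    def attach(self, view_update: Callable[[int], None]) -> None:
        self._observers.append(view_update)

    def set_value(self, value: int) -> None:
        for update in self._observers:   # push the change to every registered view
            update(value)

class EmbeddedView:
    """View rendered by the environment itself (e.g., a wall display)."""
    def update(self, value: int) -> None:
        print(f"embedded display shows {value}")

class DetachedView:
    """Stands in for a duplicated/detached UI on a mobile device; a real
    implementation would serialise the update and send it over the network."""
    def update(self, value: int) -> None:
        print(f"mobile device UI shows {value}")

class Controller:
    """Routes user input (possibly from a remote device) to the model."""
    def __init__(self, model: Model) -> None:
        self._model = model
    def handle_input(self, value: int) -> None:
        self._model.set_value(value)

model = Model()
model.attach(EmbeddedView().update)
model.attach(DetachedView().update)    # duplicated view on a remote handheld
Controller(model).handle_input(42)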

6.
The severe resource restrictions of computer-augmented everyday artifacts imply substantial problems for the design of applications in smart environments. Some of these problems can be overcome by exploiting the resources, I/O interfaces, and computing capabilities of nearby mobile devices in an ad-hoc fashion. We identify the means by which smart objects can make use of handheld devices such as PDAs and mobile phones, and derive the following major roles of handhelds in smart environments: (1) mobile infrastructure access point; (2) user interface; (3) remote sensor; (4) mobile storage medium; (5) remote resource provider; and (6) weak user identifier. We present concrete applications that illustrate these roles, and describe how handhelds can serve as mobile mediators between computer-augmented everyday artifacts, their users, and background infrastructure services. The presented applications include a remote interaction scenario, a smart medicine cabinet, and an inventory monitoring application.

7.
This article identifies, catalogues, and discusses factors, whether pathological or situational in nature, that cause visual impairment affecting touch and gesture input on smart mobile devices. Because the vast majority of interactions with touchscreen devices are highly visual in nature, any factor that prevents a clear, direct view of the mobile device’s screen can have negative implications for the effectiveness and efficiency of the interaction. This work presents the first overview of such factors, grouped in a catalogue of users, devices, and environments. The elements of the catalogue (e.g., psychological factors that relate to the user, or the social acceptability of mobile device use in public that relates to the social environment) are discussed in the context of current eye pathology classification from medicine and the recent literature in human–computer interaction on mobile touch and gesture input for people with visual impairments, for which a state-of-the-art survey is conducted. The goal of this work is to help systematize research on visual impairments and mobile touchscreen interaction by providing a catalogue-based view of the main causes of visual impairments affecting touch and gesture input on smart mobile devices.

8.
As the use of mobile touch devices continues to increase, distinctive user experiences can be provided through direct manipulation. The characteristics of touch interfaces should therefore be considered with respect to their controllability. This study aims to provide a design approach for touch-based user interfaces: a derivation procedure for the touchable area is proposed as a design guideline based on input behavior. To this end, two empirical tests were conducted on a smartphone interface. Fifty-five participants were asked to perform a series of input tasks on a screen. The results showed that a touchable area with a desired hit rate of 90% could be derived, depending on the icon design. To improve the applicability of the touchable area, user error was analyzed using an omission-commission classification. Of the 90%, 95% and 99% hit-rate criteria, the 95% criterion produced the most suitable design. This study contributes practical implications for user interaction design with finger-based controls. Relevance to industry: This research describes a distinctive design approach that guarantees the desired touch accuracy for effective use of mobile touch devices. The results will therefore encourage interface designers to take the input behavior of fingers into account from a user-centered perspective.
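A minimal Python sketch of the general idea of deriving a touchable area from observed input behaviour: given the scatter of touch points around a target's centre, find the radius that captures a chosen proportion of touches. The circular area, the millimetre units and the quantile-based derivation are illustrative assumptions, not the paper's actual procedure.

import numpy as np

def touchable_radius(touch_offsets: np.ndarray, hit_rate: float = 0.95) -> float:
    """Given touch offsets (N x 2, in mm) relative to a target's centre,
    return the radius of a circular touchable area that would contain the
    requested proportion of the observed touches."""
    distances = np.linalg.norm(touch_offsets, axis=1)   # radial error per touch
    return float(np.quantile(distances, hit_rate))      # radius covering hit_rate of touches

# Example with simulated touch scatter (standard deviation ~1.5 mm)
rng = np.random.default_rng(0)
offsets = rng.normal(scale=1.5, size=(500, 2))
print(f"95% touchable radius: {touchable_radius(offsets):.2f} mm")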

9.
Accessibility revolves around building products, including electronic devices and digital content, so that diverse users can conveniently utilize them, irrespective of their capabilities. In recent years, the concept of touchscreen accessibility has gained remarkable attention, especially given today's considerable reliance on mobile touchscreen devices (MTDs) for information acquisition and dissemination. For users who are visually impaired, MTDs unlock different opportunities for independence and functioning. Thus, with the increasing ubiquity of MTDs and their potential extensive utility for all demographics, it becomes paramount to ensure that these devices and the content delivered on them are accessible. While it might seem straightforward to achieve accessibility on MTDs, attaining this outcome is governed by an interplay of different elements: platform (i.e., operating system) built-in support for accessibility features; content rendering modalities and structures pertaining to user needs and the peculiarities of MTDs, as informed by standard accessibility guidelines; user studies uncovering preferences and best practices while interacting with MTDs; national legislation and policies; and the use of third-party devices such as assistive technologies. In this paper, mobile touchscreen accessibility for users who are visually impaired is surveyed with a focus on three aspects: (1) the existing built-in accessibility features within popular mobile platforms; (2) the nature of non-visual interaction and how users who are visually impaired access, navigate, and create content on MTDs; and (3) the studies that tackle different issues pertaining to touchscreen accessibility, such as extraction of user needs and interaction preferences, identification of the most critical accessibility problems encountered on MTDs, integration of mobile accessibility into standard accessibility guidelines, and investigation of existing guidelines in terms of sufficiency and appropriateness.

10.
The prevalent visions of ambient intelligence emphasise natural interaction between the user and the functions and services embedded in the environment or available through mobile devices. In these scenarios the physical and virtual worlds seamlessly gear into each other, making crossing the border between these worlds natural or even invisible to the user. The bottleneck in reaching these scenarios appears to be the natural mapping between physical objects and their virtual counterparts. The emergence of local connectivity in mobile devices opens possibilities for implementing novel user interface paradigms to enhance this mapping. We present a physical selection paradigm for implementing intuitive human-technology interaction with mobile devices. To demonstrate the feasibility of the paradigm, we implemented two experimental set-ups using commercially available smart phones with IrDA connectivity. The experiments involved selecting a website by physically pointing at its symbol and making a phone call by pointing at an icon representing the person to be called. In tentative user experiments, the physical selection method was more time-efficient and was perceived more positively by users than a conventional method.

11.
A natural language interface (NLI) improves the ease of use of information systems by supporting sophisticated human-computer interaction. To address the challenges mobile devices pose to user interaction in information management, we propose an NLI as a promising solution. In this paper, we review state-of-the-art NLI technologies and analyse user requirements for managing notable information on mobile devices. To minimize the technical difficulties of developing NLI systems and to improve their usability, we develop general principles for NLI design, filling a gap in the literature. To satisfy user requirements for information management on mobile devices, we design an NLI-enabled information management architecture. Two usage scenarios show that the architecture could reduce user navigation effort and improve the efficiency and effectiveness of managing information on mobile devices. We conclude the article with the implications of this study and suggestions for future directions.

12.
The RELATE interaction model is designed to support spontaneous interaction of mobile users with devices and services in their environment. The model is based on spatial references that capture the spatial relationship of a user’s device with other co-located devices. Spatial references are obtained by relative position sensing and are integrated in the mobile user interface to spatially visualize the arrangement of discovered devices and to provide direct access for interaction across devices. In this paper we discuss two prototype systems demonstrating the utility of the model in collaborative and mobile settings, and present a study on the usability of spatial list and map representations for device selection.
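A small Python sketch of what a spatial reference could boil down to for a spatial list representation: each discovered device is reduced to a distance and a bearing relative to the user's device and ordered around the user. The coordinate inputs, the sorting criterion and the function name are assumptions for illustration, not the RELATE implementation.

import math
from typing import Dict, List, Tuple

def spatial_list(own_pos: Tuple[float, float],
                 own_heading_deg: float,
                 devices: Dict[str, Tuple[float, float]]) -> List[Tuple[str, float, float]]:
    """Return (name, distance, relative bearing) for each co-located device,
    ordered by bearing so a spatial list can present them around the user.
    Positions and heading are assumed to come from relative position sensing."""
    entries = []
    for name, (x, y) in devices.items():
        dx, dy = x - own_pos[0], y - own_pos[1]
        distance = math.hypot(dx, dy)
        bearing = (math.degrees(math.atan2(dx, dy)) - own_heading_deg) % 360.0
        entries.append((name, distance, bearing))
    return sorted(entries, key=lambda e: e[2])   # order devices by relative bearing

print(spatial_list((0.0, 0.0), 0.0,
                   {"projector": (1.0, 2.0), "laptop": (-1.5, 0.5)}))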

13.
In the richly networked world of the near future, mobile computing users will be confronted with an ever-expanding array of devices and services accessible in their environments. In such a world, we cannot expect to have specific applications available that support every conceivable combination of devices we may wish to use. Instead, we believe that many of our interactions with the network will be characterized by the use of “general purpose” tools that allow us to discover, use, and integrate the multiple devices around us. This paper lays out the case for why we believe that so-called “serendipitous integration” is a necessary fact of mobile computing, and explores a number of design experiments in supporting end-user configuration and control of networked environments through general-purpose tools. We present an iterative design approach to creating such tools and their user interfaces, discuss our observations about the challenges of designing for such a world, and then explore a number of tools that take differing design approaches to overcoming these challenges. We conclude with a set of reflections on the user experience issues that we believe are inherent in dealing with ad hoc mobile computing in richly networked environments.

14.
The market for devices like mobile phones, multifunctional watches, and personal digital assistants is growing rapidly. Most of these mobile user devices need security for their prospective electronic-commerce applications. While new technology has simplified many business and personal transactions, it has also opened the door to high-tech crime. We investigate design options for mobile user devices that are used in legally significant applications.

15.
In the transition from a device-oriented paradigm toward a more task-oriented paradigm with increased interoperability, people are struggling with inappropriate user interfaces, competing standards, technical incompatibilities, and other difficulties. The current handles for users to explore, make, and break connections between devices seem to disappear in overly complex menu structures displayed on small screens. This paper tackles the problem of establishing connections between devices in a smart home environment, by introducing an interaction model that we call semantic connections. Two prototypes are demonstrated that introduce both a tangible and an augmented reality approach toward exploring, making, and breaking connections. In the augmented reality approach, connections between real-world objects are visualized by displaying visible lines and icons from a mobile device containing a pico projector. In the tangible approach, objects in the environment are tagged and can be scanned and interacted with, to explore connection possibilities, and manipulate the connections. We discuss the technical implementation of a pilot study setup used to evaluate both our interaction approaches. We conclude the paper with the results of a user study that shows how the interaction approaches influence the mental models users construct after interacting with our setup.

16.
In this article, we present a practical approach to analyzing mobile usage environments. We propose a framework for analyzing the restrictions that the characteristics of different environments place on the user's capabilities. These restrictions, together with current user interfaces, constitute the cost of interaction in a given environment. Our framework aims to illustrate that cost and what causes it. The framework presents a way to map features of the environment to the effects they have on the resources of the user and, in some cases, on the mobile device. This information can be used to guide the design of adaptive and/or multimodal user interfaces or of devices optimized for certain usage environments. An example of using the framework is presented, along with some major findings and three examples of applying them in user interface design.

17.
High user interaction capability of mobile devices can help improve the accuracy of mobile visual search systems. At query time, multiple views of an object can be captured from different viewing angles and at different scales with the mobile device camera, providing richer information about the object than a single view and hence returning more accurate results. Motivated by this, we propose a new multi-view visual query model over multi-view object image databases for mobile visual search. Multi-view images of objects acquired by the mobile clients are processed and local features are sent to a server, which combines the query image representations with early/late fusion methods and returns the query results. We performed a comprehensive analysis of early and late fusion approaches using various similarity functions, on an existing single-view and a new multi-view object image database. The experimental results show that multi-view search provides significantly better retrieval accuracy than traditional single-view search.
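A toy Python sketch contrasting the two fusion strategies named in the abstract. The paper's system works on local image features with several fusion variants and similarity functions, so the averaged descriptors, dot-product similarity and score summation below are purely illustrative assumptions.

import numpy as np

def early_fusion(view_features):
    """Early fusion: merge per-view descriptors into one query representation
    (here by averaging) before matching against the database."""
    return np.mean(np.stack(view_features), axis=0)

def late_fusion(view_features, database):
    """Late fusion: score the database separately for each view, then
    combine the per-view similarity scores (here by summing)."""
    scores = np.zeros(len(database))
    for f in view_features:
        scores += database @ f          # cosine similarity if rows are L2-normalised
    return scores

# Toy example: 3 query views and a database of 5 image descriptors,
# all L2-normalised 8-dimensional vectors (dimensions are illustrative).
rng = np.random.default_rng(1)
def unit(v): return v / np.linalg.norm(v, axis=-1, keepdims=True)
views = [unit(rng.normal(size=8)) for _ in range(3)]
db = unit(rng.normal(size=(5, 8)))

print("early-fusion best match:", int(np.argmax(db @ early_fusion(views))))
print("late-fusion  best match:", int(np.argmax(late_fusion(views, db))))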

18.
Mobile computing systems usually express a user movement trajectory as a sequence of areas that capture the user's movement trace. Given a set of user movement trajectories, user movement patterns refer to the sequences of areas through which a user frequently travels. To obtain user movement patterns for mobile applications, prior studies explore the problem of mining them from the movement logs of mobile users, where a data record is generated whenever a mobile user crosses base station coverage areas. However, this type of movement log does not normally exist in the system, and collecting it generates extra overhead. By exploiting an existing log, namely call detail records, this article proposes a Regression-based approach for mining User Movement Patterns (abbreviated as RUMP). This approach views call detail records as randomly sampled trajectory data, and user movement patterns are therefore represented as movement functions. We propose algorithm LS (Large Sequence) to extract the call detail records that capture frequent user movement behaviors. By exploiting the spatio-temporal locality of continuous movement (i.e., a mobile user is likely to be in nearby areas if the time interval between consecutive calls is small), we develop algorithm TC (Time Clustering) to cluster call detail records. Then, using regression analysis, we develop algorithm MF (Movement Function) to derive movement functions. Experimental studies involving both synthetic and real datasets show that RUMP is able to derive user movement functions close to the frequent movement behaviors of mobile users.
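A simplified Python sketch of the regression step: fitting polynomial movement functions x(t), y(t) to timestamped call locations by least squares. RUMP's actual MF algorithm, and the LS/TC steps that precede it, are more elaborate; the polynomial degree, coordinate units and function names here are assumptions for illustration.

import numpy as np

def fit_movement_function(times, positions, degree=2):
    """Fit polynomial movement functions x(t), y(t) to call-detail-record
    samples (time of call, cell location). Returns a (degree+1, 2)
    coefficient matrix from plain least-squares polynomial regression."""
    coeffs_x = np.polyfit(times, positions[:, 0], degree)
    coeffs_y = np.polyfit(times, positions[:, 1], degree)
    return np.stack([coeffs_x, coeffs_y], axis=1)

def predict_position(coeffs, t):
    """Evaluate the fitted movement function at time t."""
    return (float(np.polyval(coeffs[:, 0], t)),
            float(np.polyval(coeffs[:, 1], t)))

# Toy example: five calls made along a roughly straight commute
t = np.array([8.0, 8.5, 9.0, 9.5, 10.0])                      # hours of day
pos = np.array([[0, 0], [1, 2], [2, 4], [3, 6], [4, 8]], dtype=float)
coeffs = fit_movement_function(t, pos)
print(predict_position(coeffs, 9.25))                         # expected position around 9:15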

19.
Data visualizations have been widely used on mobile devices like smartphones for various tasks (e.g., visualizing personal health and financial data), making it convenient for people to view such data anytime and anywhere. However, others nearby can also easily peek at the visualizations, resulting in personal data disclosure. In this paper, we propose a perception-driven approach to transform mobile data visualizations into privacy-preserving ones. Specifically, based on human visual perception, we develop a masking scheme to adjust the spatial frequency and luminance contrast of colored visualizations. The resulting visualization retains its original information in close proximity but reduces visibility when viewed from a certain distance or farther away. We conducted two user studies to inform the design of our approach (N=16) and systematically evaluate its performance (N=18), respectively. The results demonstrate the effectiveness of our approach in terms of privacy preservation for mobile data visualizations.
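The following Python sketch is a rough stand-in for the kind of masking the abstract describes: keep only the high-spatial-frequency component of the visualization at reduced luminance contrast, so fine detail remains legible up close but fades at viewing distance, where the eye's sensitivity to high angular frequencies drops. The Gaussian cut-off, the contrast factor and the per-channel treatment are assumptions, not the paper's perception-derived parameters.

import numpy as np
from scipy.ndimage import gaussian_filter

def privacy_mask(image: np.ndarray, sigma: float = 2.0,
                 contrast: float = 0.5) -> np.ndarray:
    """Apply an illustrative privacy mask to an H x W x 3 image in [0, 1]:
    remove low spatial frequencies and render the remaining high-frequency
    detail at reduced contrast around the mean luminance."""
    # Low-pass each colour channel; subtracting it leaves high frequencies only.
    low = np.stack([gaussian_filter(image[..., c], sigma) for c in range(3)], axis=-1)
    high = image - low
    return np.clip(image.mean() + contrast * high, 0.0, 1.0)

# Example on a random placeholder "chart" image
chart = np.random.default_rng(2).random((120, 160, 3))
print(privacy_mask(chart).shape)   # (120, 160, 3)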

20.
This paper describes the concepts, design, implementation, and performance evaluation of a 3D-based user interface for accessing IoT-based Smart Environments (IoT-SE). The generic interaction model of the described work addresses major challenges of human-IoT-SE interaction, such as the cognitive overload associated with manual device selection in complex IoT-SEs, loss of user control, a missing system image, and over-automation. To address these challenges we propose a 3D-based mobile interface for mixed-initiative interaction in IoT-SE. The 3D visualization and 3D UI, acting as the central feature of the system, create a logical link between physical devices and their virtual representation on the end user's mobile device. The user can thus easily identify a device within the environment based on its position, orientation, and form, and access the identified device through the 3D interface for direct manipulation within the scene, which overcomes the problem of manual device selection. In addition, the 3D visualization provides a system image for the IoT-SE, which helps users understand the environment and what is going on in it. Furthermore, the mobile interface allows users to control how much, and in what way, the IoT-SE automates the environment: users can stop or postpone system-triggered automatic actions they do not want, or remove a rule permanently, thereby deleting smart behaviors of their IoT-SE. This helps to overcome the over-automation challenge. In this paper, we present the design, implementation and evaluation of the proposed interaction system. We chose smart meeting rooms as the context for prototyping and evaluating our interaction concepts; however, the presented concepts and methods are generic and could be adapted to similar environments such as smart homes. We conducted a subjective usability evaluation (ISO-Norm 9241/110) with 16 users. Overall, the study results indicate that the proposed 3D user interface achieved a high score on the ISO-Norm scales.
