Similar Documents
20 similar documents found (search time: 46 ms).
1.
Existing e-commerce applications on the web provide users a relatively simple, browser-based interface to access available products. Customers are not provided with the same shopping experience they would have in an actual store or mall. This experience, however, can be achieved by creating a virtual shopping mall environment that simulates most actual shopping interactions. The virtual mall brings together the services and products of various vendors. Users can navigate through the virtual shopping mall, add items of interest to a virtual shopping cart, and perform searches assisted by ‘intelligent agents’. A Collaborative Virtual Environment (CVE) can realize a sophisticated virtual shopping application; in such a CVE, a large number of potential users may interact with each other. In this paper, vCOM, a VRML- and Java3D-based prototype, is presented, which permits users to navigate around virtual e-commerce worlds and perform collaborative shopping and intelligent searches with the assistance of software agents, in order to find the products and services of interest to them. They can then negotiate with sales agents for the best possible price and make a secure buying transaction. The virtual mall prototype also allows the user to communicate with an ‘intelligent shopping assistant’ using simple voice commands. This assistant interacts with the shopper using voice synthesis and helps him/her use the interface to navigate the mall efficiently. Real-time interactions between the entities in this shared environment are implemented over the High Level Architecture (HLA), an IEEE standard for distributed simulation and modeling. The paper focuses on user interface design, collaborative e-commerce through HLA and the multi-agent architecture.

2.
ABSTRACT

In the near future, people are likely to engage with smart devices by instructing them in natural language. A fundamental question is how intelligent agents might interpret such instructions and learn new tasks. In this article we present the first speech-based virtual assistant that can be taught new commands by speech. A user study on our agent has shown that people can teach it new commands. We also show that people see great advantage in using an instructable agent, and we determine what users believe are the most important use cases for such an agent.

3.
In the past few decades, researchers worldwide have worked to develop intelligent wheelchairs for people with reduced mobility. In many of these projects, the structured set of commands relies on sensor-based control. Many types of commands are available, but the final decision is made by the user. Earlier work established a behaviour-based multi-agent form of control that lets the user select the option best suited to his/her preferences or requirements. This type of command aims at ‘merging’ the user and his/her machine, a kind of symbiotic relationship that makes the machine more amenable and the command more effective. In this contribution, the approach is based on a curve-matching procedure to provide comprehensive assistance to the user. Using a model of the most frequently travelled paths, this new agent assists the user during navigation by proposing the direction to take once a path has been recognized. This spares the user the effort of determining a new direction, which may be a major benefit in the case of severe disabilities. The approach uses particle filtering to recognize the most frequent paths on a topological map of the environment.
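A minimal sketch of how such path recognition might work with a particle filter: each particle hypothesizes one of the stored frequent paths and its progress along it, and is reweighted by how well the wheelchair's observed position on the topological map matches the path's prediction. The map, paths and noise model below are illustrative assumptions, not the authors' implementation.

```python
import random

# Hypothetical topological map: each stored frequent path is a node sequence.
KNOWN_PATHS = {
    "to_kitchen": [0, 1, 2, 5],
    "to_bedroom": [0, 1, 3, 7],
}

def observation_likelihood(observed_node, expected_node):
    """Toy noise model: high likelihood when the wheelchair is at the node
    the path predicts, small otherwise."""
    return 0.8 if observed_node == expected_node else 0.05

def recognize_path(observations, n_particles=200):
    # Each particle is (path_name, progress_index); start uniformly.
    particles = [(name, 0) for name in KNOWN_PATHS
                 for _ in range(n_particles // len(KNOWN_PATHS))]
    for obs in observations:
        # Weight each particle by how well the observation fits its path.
        weights = []
        for name, idx in particles:
            path = KNOWN_PATHS[name]
            expected = path[min(idx, len(path) - 1)]
            weights.append(observation_likelihood(obs, expected))
        # Resample in proportion to weight, then advance path progress.
        particles = random.choices(particles, weights=weights, k=len(particles))
        particles = [(name, idx + 1) for name, idx in particles]
    # The most represented path hypothesis is the recognized one.
    counts = {}
    for name, _ in particles:
        counts[name] = counts.get(name, 0) + 1
    return max(counts, key=counts.get)

print(recognize_path([0, 1, 3, 7]))  # expected: "to_bedroom"
```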

4.
Technological advancements, including those in the medical field, have drastically improved our quality of life, pushing life expectancy ever higher. This has also increased the size of the elderly population. More than ever, health-care institutions must care for large numbers of elderly patients, one of the contributing factors in rising health-care costs. Rising costs have prompted hospitals and other health-care institutions to seek cost-cutting measures in order to remain competitive. One avenue being explored lies in technological advancements that can make hospital working environments much more efficient. Various communication technologies, mobile computing devices, micro-embedded devices and sensors can support medical staff efficiency and improve health-care systems. In particular, one promising application of these technologies is deducing medical staff activities. Continuous knowledge of health-care staff activities can provide medical staff with crucial information about particular patients, interconnect supporting applications seamlessly (e.g. a doctor diagnosing a patient can automatically be sent the patient's lab report from the pathologist), give a clear picture of how doctors and nurses use their time, and enable remote virtual collaboration between activities, thus creating a strong base for an efficient collaborative environment. In this paper, we describe our activity recognition system, which, in conjunction with our efficiency mechanism, has the potential to cut health-care costs by making working environments more efficient. Initially, we outline the activity recognition process, which can infer user activities based on the self-organisation of the surrounding objects that a user may manipulate. We then use the activity recognition information to enhance virtual collaboration in order to improve the overall efficiency of tasks within a hospital environment. We have analysed a number of medical staff activities to guide our simulation setup. Our results show accurate activity recognition for individual users with respect to their behaviour, while our task-allocation process between doctors and nurses supports remote virtual collaboration with maximum efficiency within the resource constraints.
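As an illustration of inferring activities from the objects a user manipulates, the sketch below scores candidate activities by how well the observed object set overlaps each activity's typical objects. The activity-object table and the Jaccard scoring are invented stand-ins for the paper's self-organisation mechanism.

```python
# Hypothetical mapping from medical-staff activities to the objects
# typically manipulated while performing them.
ACTIVITY_OBJECTS = {
    "patient_examination": {"stethoscope", "chart", "gloves"},
    "medication_round": {"trolley", "pill_box", "chart"},
    "lab_reporting": {"terminal", "lab_sample"},
}

def infer_activity(observed_objects):
    """Return the activity whose object set best overlaps the observations
    (Jaccard similarity), a stand-in for the paper's self-organising scheme."""
    def score(objs):
        union = len(objs | observed_objects)
        return len(objs & observed_objects) / union if union else 0.0
    return max(ACTIVITY_OBJECTS, key=lambda a: score(ACTIVITY_OBJECTS[a]))

print(infer_activity({"stethoscope", "gloves"}))  # -> "patient_examination"
```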

5.
Proactive Fuzzy Control and Adaptation Methods for Smart Homes
Proactive, context-aware computing isn't new. In 2000, David Tennenhouse called for a change in the boundary between the physical and virtual worlds. He identified proactive computing as an alternative to interactive computing and defined how future systems should become more involved with the real world. He also considered context-aware control systems with online adaptation especially promising. Today, ambient-intelligence researchers show increasing interest in both proactive and context-aware applications. Using different context-recognition methods, researchers can easily gather application-specific information from the environment and enable context-triggered actions. According to Hee Eon Byun and Keith Cheverst, a context-aware home can serve its inhabitants more flexibly and adaptively than an ordinary home. They also claim that proactive systems can be built using machine-learning algorithms with context recognition. In addition to context recognition, adaptivity is essential in intelligent environments. The environment can adjust itself using adaptation mechanisms to comply with user preferences; it can become unobtrusive and better support user activities. A home adapting to its inhabitants' living style is much more convenient than a user adapting to the home's behavior. In this article, we show how to develop a proactive, adaptive, fuzzy home-control system, present the algorithm we used for adaptation, and evaluate the test results we obtained.
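A minimal sketch of this kind of fuzzy home control: two membership functions, two rules and weighted-average defuzzification, with a `learned_offset` parameter standing in for online adaptation of the rule outputs toward inhabitant preferences. All membership functions and rule values are assumptions, not the article's actual controller.

```python
def mu_cold(t):   # membership of "cold" for temperature t (deg C)
    return max(0.0, min(1.0, (20.0 - t) / 5.0))

def mu_warm(t):   # membership of "warm"
    return max(0.0, min(1.0, (t - 18.0) / 5.0))

def heater_power(t, learned_offset=0.0):
    """Weighted-average (Sugeno-style) defuzzification of two rules:
    IF cold THEN power high; IF warm THEN power low.
    `learned_offset` is a placeholder for the article's online adaptation,
    which would shift rule outputs toward the inhabitant's preferences."""
    w_cold, w_warm = mu_cold(t), mu_warm(t)
    high, low = 0.9 + learned_offset, 0.1 + learned_offset
    return (w_cold * high + w_warm * low) / (w_cold + w_warm + 1e-9)

for t in (15, 19, 24):
    print(t, round(heater_power(t), 2))  # 0.9, 0.5, 0.1
```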

6.
This paper studies the design and application of a novel visual attention model that computes the user's gaze position automatically, i.e., without using a gaze-tracking system. The model we propose is specifically designed for real-time first-person exploration of 3D virtual environments. It is the first model adapted to this context that can compute, in real time, a continuous gaze point position instead of a set of 3D objects potentially observed by the user. To do so, contrary to previous models that use a mesh-based representation of visual objects, we introduce a representation based on surface elements. Our model also simulates visual reflexes and the cognitive processes that take place in the brain, such as the gaze behavior associated with first-person navigation in the virtual environment. It combines both bottom-up and top-down components to compute a continuous on-screen gaze point position intended to match the user's actual gaze. We conducted an experiment to study and compare the performance of our method with a state-of-the-art approach. Our results are significantly better, in some cases with accuracy gains of more than 100 percent. This suggests that computing a gaze point in a 3D virtual environment in real time is possible and is a valid approach compared to object-based ones. Finally, we expose different applications of our model when exploring virtual environments. We present different algorithms that can improve or adapt the visual feedback of virtual environments based on gaze information. We first propose a level-of-detail approach that relies heavily on multiple-texture sampling. We show that the gaze information from our visual attention model can be used to increase visual quality where the user is looking, while maintaining a high refresh rate. Second, we introduce the use of the visual attention model in three visual effects inspired by the human visual system, namely depth-of-field blur, camera motions, and dynamic luminance. All these effects are computed based on the simulated gaze of the user and are meant to improve the user's sensations in future virtual reality applications.
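A toy sketch of the bottom-up/top-down combination: two per-pixel maps are blended linearly and the maximum gives a continuous gaze point on screen. The maps, weights and linear blending are illustrative assumptions; the paper's actual model also simulates visual reflexes and navigation-specific gaze behavior, which are not reproduced here.

```python
import numpy as np

def gaze_point(bottom_up, top_down, w_bu=0.6, w_td=0.4):
    """Blend a bottom-up saliency map with a top-down task map and return
    the screen position of the most attended pixel.  The linear weighting
    is an illustrative choice, not the paper's exact formula."""
    combined = w_bu * bottom_up + w_td * top_down
    y, x = np.unravel_index(np.argmax(combined), combined.shape)
    return x, y  # screen coordinates of the estimated gaze point

h, w = 90, 160
rng = np.random.default_rng(0)
bu = rng.random((h, w))                   # per-pixel image saliency
td = np.zeros((h, w)); td[45, 80] = 5.0   # prior: gaze drawn to screen centre
print(gaze_point(bu, td))                 # -> (80, 45)
```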

7.
In this paper, a system based on several intelligent techniques, including speech recognition, natural language processing and linear planning, is described. These techniques have been employed to generate a sequence of operations understandable by the control system of a robot that is to perform a semi-automatic surgical task. Thus, a system has been implemented that translates a surgeon's ‘natural’ language into robot-executable commands. A robotic simulator has then been implemented in order to test the planned sequence in a virtual environment.
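A minimal sketch of the linear-planning step, assuming a toy STRIPS-style operator set: the goal extracted from the surgeon's utterance is satisfied by greedily chaining operators whose preconditions hold. The operators and goal are hypothetical; the paper's planner and robot command set are not reproduced here.

```python
# Hypothetical operators for a semi-automatic surgical step:
# each maps a precondition set to an effect set.
OPERATORS = {
    "grasp_needle":  ({"arm_free"}, {"holding_needle"}),
    "move_to_site":  ({"holding_needle"}, {"at_site"}),
    "insert_needle": ({"at_site"}, {"needle_inserted"}),
}

def linear_plan(state, goal):
    """Greedily chain applicable operators until the goal holds."""
    plan = []
    while not goal <= state:
        for name, (pre, eff) in OPERATORS.items():
            if pre <= state and not eff <= state:
                plan.append(name)
                state = state | eff
                break
        else:
            raise ValueError("no applicable operator")
    return plan

# "Insert the needle" parsed into the goal {"needle_inserted"}:
print(linear_plan({"arm_free"}, {"needle_inserted"}))
# -> ['grasp_needle', 'move_to_site', 'insert_needle']
```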

8.
Humans use a combination of gesture and speech to interact with objects, and usually do so more naturally without holding a device or pointer. We present a system that incorporates user body-pose estimation, gesture recognition and speech recognition for interaction in virtual reality environments. We describe a vision-based method for tracking the pose of a user in real time and introduce a technique that provides parameterized gesture recognition. More precisely, we train a support vector classifier to model the boundary of the space of possible gestures, and train Hidden Markov Models (HMMs) on specific gestures. Given a sequence, we can find the start and end of various gestures using the support vector classifier, and find gesture likelihoods and parameters with an HMM. A multimodal recognition process is performed using rank-order fusion to merge speech and vision hypotheses. Finally we describe the use of our multimodal framework in a virtual world application that allows users to interact using gestures and speech.
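A small sketch of rank-order fusion as described: each modality contributes a ranked hypothesis list, ranks are summed, and the lowest combined rank wins. The hypothesis names and the penalty for missing hypotheses are illustrative assumptions.

```python
def rank_order_fusion(speech_hyps, vision_hyps):
    """Merge two ranked hypothesis lists by summing ranks; the hypothesis
    with the lowest combined rank wins.  A hypothesis missing from one
    list is penalised with a rank one past the end of that list."""
    candidates = set(speech_hyps) | set(vision_hyps)
    def combined_rank(h):
        r_s = speech_hyps.index(h) if h in speech_hyps else len(speech_hyps)
        r_v = vision_hyps.index(h) if h in vision_hyps else len(vision_hyps)
        return r_s + r_v
    return sorted(candidates, key=combined_rank)

speech = ["rotate object", "move object", "delete object"]  # best first
vision = ["move object", "grab object", "rotate object"]
print(rank_order_fusion(speech, vision)[0])  # -> "move object"
```

Fusion can thus overrule a single modality: speech ranked "rotate object" first, but the combined ranks favour "move object".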

9.
When interacting in a virtual environment, users are confronted with a number of interaction techniques. These techniques may complement each other, but in some circumstances can be used interchangeably. This makes it difficult for the user to determine which interaction technique to use. Furthermore, the use of multimodal feedback, such as haptics and sound, has proven beneficial for some, but not all, users. This complicates the development of such virtual environments, as designers are unsure about the implications of adding interaction techniques and multimodal feedback. A promising approach to this problem lies in adaptation and personalization. By incorporating knowledge of a user's preferences and habits, the user interface should adapt to the current context of use. This could mean that only a subset of all possible interaction techniques is presented to the user. Alternatively, the interaction techniques themselves could be adapted, e.g. by changing the sensitivity or the nature of the feedback. In this paper, we propose a conceptual framework for realizing adaptive personalized interaction in virtual environments. We also discuss how to establish, verify and apply a user model, which is the first and an important step in implementing the proposed conceptual framework. This study results in general and individual user models, which are then verified to benefit users interacting in virtual environments. Furthermore, we investigate how users react to a specific type of adaptation in virtual environments (i.e. switching between interaction techniques). When such an adaptation is integrated in a virtual environment, users respond positively to it: their performance significantly improves and their level of frustration decreases.

10.

Research into virtual environments on the one hand, and artificial intelligence and artificial life on the other, has largely been carried out by two different groups of people with different preoccupations and interests, but some convergence is now apparent between the two fields. Applications in which activity independent of the user takes place (involving crowds or other agents) are beginning to be tackled, while synthetic agents, virtual humans, and computer pets are all areas in which techniques from the two fields require strong integration. The two communities have much to learn from each other if wheels are not to be reinvented on both sides. This paper reviews the issues arising from combining artificial intelligence and artificial life techniques with those of virtual environments to produce just such intelligent virtual environments. The discussion is illustrated with examples that include environments providing knowledge to direct or assist the user rather than relying entirely on the user's knowledge and skills, environments in which the user is represented by a partially autonomous avatar, environments containing intelligent agents separate from the user, and many others from both sides of the area.

11.
ABSTRACT

One of the challenges of teleoperation is the recognition of a user's intended commands, particularly in the operation of highly dynamic systems such as drones. In this paper, we present a solution to this problem by developing a generalized scheme relying on a Convolutional Neural Network (CNN) trained to recognize a user's intended commands, directed through a haptic device. Our proposed method allows the interface to be personalized for each user by pre-training the CNN on input data specific to the intended end user. Experiments were conducted using two haptic devices, and classification results demonstrate that the proposed system outperforms geometric-based approaches by nearly 12%. Furthermore, our system also lends itself to other human–machine interfaces where intention recognition is required.
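A minimal sketch, assuming a 1D CNN over short haptic trajectories (3 axes by 64 samples); the layer sizes and input shape are invented for illustration and are not the paper's architecture. Per-user personalization, as the paper describes it, would amount to pre-training or fine-tuning these weights on data from the intended end user.

```python
import torch
import torch.nn as nn

class IntentCNN(nn.Module):
    """Toy 1D CNN mapping a haptic trajectory (3 axes x 64 samples)
    to one of n_commands intended commands."""
    def __init__(self, n_commands=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(3, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
        )
        self.classifier = nn.Linear(32 * 16, n_commands)

    def forward(self, x):              # x: (batch, 3, 64)
        z = self.features(x)           # -> (batch, 32, 16)
        return self.classifier(z.flatten(1))

model = IntentCNN()
trajectory = torch.randn(1, 3, 64)       # one fake haptic trajectory
print(model(trajectory).argmax(dim=1))   # predicted command index
```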

12.
One of the challenges in the field of information processing is the need to cope with huge amounts of data. There are many environments in which large quantities of information are produced. For example, in a command-line interface, a computer user types thousands of commands, which can hide information about his/her behavior. However, processing this kind of streaming data online is a hard problem. This paper addresses the classification of streaming data from a dimensionality-reduction perspective. We propose to learn a lower-dimensional input model that best represents the data and improves prediction performance over standard techniques. The proposed method uses a maximum-dependence criterion as its distance measure and finds the transformation that best represents the command-line user. We also compare the dimensionality-reduction approach with using the full dataset. The results give deeper insight into the advantages and drawbacks of both perspectives in this user-classification setting.
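A hedged sketch of the pipeline on synthetic data: command-frequency vectors are projected to a lower dimension and classified per user. PCA is used here purely as a stand-in, since the paper's maximum-dependence projection is not reproduced; the data and labels are fabricated for illustration.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import NearestCentroid

# Fake data: each row counts how often a user typed each of 50 commands
# in a session; labels identify the user.
rng = np.random.default_rng(1)
X = rng.poisson(3.0, size=(200, 50)).astype(float)
y = rng.integers(0, 4, size=200)               # four command-line users

# PCA stands in for the paper's maximum-dependence transformation.
X_low = PCA(n_components=5).fit_transform(X)   # 50-D -> 5-D
clf = NearestCentroid().fit(X_low, y)
print(clf.predict(X_low[:3]))                  # predicted user per session
```

With real command logs the projection would be chosen to maximize dependence between the reduced features and the user identity, rather than variance as PCA does.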

13.
The purpose and significance of this project is to design a smart-home system based on ZigBee and ARM technology: CC2530 modules form a local area network within the home and control the household appliances, while an ARM gateway transmits appliance information to a server and forwards control commands to the ZigBee network. From any location with Internet access, or via a mobile phone, the user simply logs in to the home server and sends control and query commands, thus…

14.
Haptic feedback is an important component of immersive virtual reality (VR) applications and is often suggested to complement visual information through the sense of touch. This paper investigates the use of a haptic vest in navigation tasks. The haptic vest produces repulsive vibrotactile feedback from nearby static virtual obstacles, augmenting the user's spatial awareness. The tasks require the user to perform complex movements in a cluttered 3D virtual environment, such as avoiding obstacles while walking backwards and pulling a virtual object. The experimental setup consists of a room-scale environment. Ours is the first study in which a haptic vest is tracked in real time using a motion capture device, so that proximity-based haptic feedback can be conveyed according to the actual movement of the user's upper body. User study experiments were conducted with and without haptic feedback in virtual environments under both normal and limited visibility conditions. A quantitative evaluation was carried out by measuring task completion time and error (collision) rate. Multiple haptic rendering techniques were also tested. Results show that under limited visibility conditions, proximity-based haptic feedback generated by a wearable haptic vest can significantly reduce the number of collisions with obstacles in the virtual environment.
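A minimal sketch of proximity-based vibrotactile rendering: each obstacle's distance is mapped to an intensity ramp, and the cue is routed to the vest actuator facing the obstacle. The distance thresholds, actuator count and bearing-to-actuator mapping are assumptions, not the paper's rendering techniques.

```python
import math

def vibration_intensity(distance, d_max=1.0, d_min=0.15):
    """Repulsive proximity cue: silent beyond d_max metres, full strength
    at d_min; linear ramp in between.  Thresholds are illustrative."""
    if distance >= d_max:
        return 0.0
    if distance <= d_min:
        return 1.0
    return (d_max - distance) / (d_max - d_min)

def actuator_levels(user_pos, user_heading, obstacles, n_actuators=8):
    """Route each obstacle's cue to the vest actuator facing it."""
    levels = [0.0] * n_actuators
    for ox, oy in obstacles:
        dx, dy = ox - user_pos[0], oy - user_pos[1]
        bearing = (math.atan2(dy, dx) - user_heading) % (2 * math.pi)
        idx = int(bearing / (2 * math.pi) * n_actuators) % n_actuators
        levels[idx] = max(levels[idx],
                          vibration_intensity(math.hypot(dx, dy)))
    return levels

# Obstacle ahead (actuator 0) and behind (actuator 4) the user:
print(actuator_levels((0, 0), 0.0, [(0.5, 0.0), (-0.3, 0.0)]))
```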

15.
Ambient intelligence research is about ubiquitous computing and about the social and intelligent properties of computer-supported environments. These properties aim at providing inhabitants or visitors of ambient intelligence environments with support in their activities. Activities include interactions between inhabitants and between inhabitants and (semi-)autonomous agents, including mobile robots, virtual humans and other smart objects in the environment. Providing real-time support requires understanding of behavior and activities. Being able to provide real-time support also allows us to provide off-line support, that is, intelligent off-line retrieval, summarizing, browsing and even replay, possibly in a transformed way, of stored information. Real-time remote access to these computer-supported environments also allows participation in activities, and such participation can likewise profit from the real-time capture and interpretation of behavior and activities supported by ambient intelligence technology. In this paper, we illustrate and support these observations by looking at results obtained in several European and US projects, in particular projects on smart environments, whether they are smart meeting or lecture rooms, smart offices or intelligently monitored events in public spaces. In particular, we look at the augmented multi-party interaction (AMI) project in which we are involved, and we sketch a framework for transferring research results from the meeting context to the home environment context.

16.
Systems are growing in power and gaining access to more resources and services. This makes it necessary to provide user-centered systems that act as intelligent assistants. Such systems should be able to interact naturally with human users and the environment, and to take into account user goals as well as environment information and changes. In this paper, we present an architecture for the design and development of a goal-oriented, self-adaptive, smart-home environment. With this architecture, users interact with the system by expressing their goals, which are translated into a set of agent actions in a way that is transparent to the user. This is especially appropriate for environments where ambient intelligence and automatic control are integrated for the user's welfare. To validate this proposal, we designed a prototype based on the proposed architecture for smart-home scenarios. We also performed a set of experiments that show how the proposed architecture for human-agent interaction increases the number and quality of user goals achieved.

17.
Research on System Design Methods and Computational Models for Virtual Environments
Object-oriented and agent-oriented technologies are the fundamental design methods for virtual environment systems. This paper uses an object-oriented approach to construct agents and provides a set of underlying computational models that support them, such as neural networks, genetic algorithms, expert systems and planning management.

18.
Based on a microcontroller module, an LD3320 speech-recognition module and a two-degree-of-freedom pan-tilt module, an intelligent voice-control system with speaker-independent speech recognition, a smart globe, was designed. The globe recognizes a spoken command naming any country, drives the pan-tilt unit so that the target country on the globe faces the user, lights the LED representing that country's capital, and plays an overview of the country. The globe can be applied in geography teaching and is a reliable, powerful and engaging teaching aid.

19.
In this article, the idea of a multipurpose, fuzzy-semantic-enhanced 3D virtual reality simulator for evaluating maritime robot algorithms and analyzing maritime missions is presented. The simulator blends digital mock-up technology with semantic domain knowledge of the system to analyze tasks remotely. The 3D virtual reality (VR) gives operators detailed information and guidance during real-time teleoperation, and the incorporation of fuzzy semantic knowledge makes the virtual environment intelligent and automatic. Integrating a fuzzy, scene-independent ontology with the virtual environment (VE) yields a knowledge-driven, interoperable virtual environment that supports natural-language querying, personalization, interpretation and manipulation. The system's distinctive semantic VR scene-builder utility draws the VR environment automatically from a high-level specification supplied by the client. The proposed simulator can be used effectively for real-time robot training and for evaluating AI-based algorithms designed for intelligent vessels and AUVs, without knowledge of the complex underlying VR scene-building technologies. Furthermore, it makes it possible to optimize physical-environment operations in advance by mimicking the real world in a virtual environment. Remote operations and feasibility analyses performed on the virtual simulator can save cost and time and provide operators with advance information and guidance.

20.

Most of today's virtual environments are populated with some kind of autonomous, life-like agents. Such agents follow a preprogrammed sequence of behaviors that excludes the user as a participating entity in the virtual society. In order to make inhabited virtual reality an attractive place for information exchange and social interaction, we need to equip the autonomous agents with some perception and interpretation skills. In this paper we present one such skill: human action recognition. In contrast to human-computer interfaces that focus on speech or hand gestures, we propose full-body integration of the user. We present a model of human actions along with a real-time recognition system. To cover the bilateral aspect of human-computer interfaces, we also discuss some action-response issues. In particular, we describe a motion-management library that solves animation continuity and mixing problems. Finally, we illustrate our system with two examples and discuss what we have learned.
