Similar Articles
Found 20 similar articles (search time: 0 ms)
1.
The validation of a product interface is often a critical issue in the design process. Virtual reality and mixed reality (MR) can enhance the interactive simulation of a product's human–machine interface (HMI), as these technologies allow engineers to directly involve end users in the usability assessment. This paper describes an MR environment specifically dedicated to the usability evaluation of a product interface, which allows the HMI behaviour to be simulated using the same models and the same software employed by engineers during the design phase. Our approach is based on a run-time connection between the visualisation software and the simulators used for product design and analysis. In particular, we use Matlab/Simulink to model and simulate the product behaviour, and Virtools to create the interactive MR environment in which the end user can test the product. Thanks to this architecture, any modification made to the behaviour models is immediately testable in MR.

2.
This paper deals with fundamental issues of human-robot cooperation in the precise positioning of a flat object on a target. Based on an analysis of human-human interaction, two cooperation schemes are introduced, and several algorithms implementing these schemes are developed. A general theoretical framework for human-robot cooperation has been developed to represent these algorithms. The algorithms were evaluated using our in-house robot prototype, and experiments with human subjects have demonstrated the effectiveness of our schemes. The main problem was the regulation of the robot-human interaction: since the robot has no range sensors, it has to rely on the force and displacement information resulting from the interaction with the human to understand the human's intention. The way the robot interprets these signals is crucial for smooth interaction. To carry out a concrete task, a simplification was made in which the robot and the human do not directly hold the object but rather a frame to which the object and various sensors are attached.

3.
The disassembly process is the main step in dealing with end-of-life (EOL) products. So far, this process has been carried out mostly manually. Manual disassembly is not economically efficient, and robotic systems are not reliable for complex disassembly operations because of their high level of uncertainty. In this research, a disassembly planning method based on human-robot collaboration is proposed. The method combines the flexibility and ability of humans to deal with complex tasks with the repeatability and accuracy of a robot. In addition, to increase the efficiency of the process, components are targeted based on their remanufacturability parameters. First, human-robot collaboration tasks are classified, and the definition and characteristics of human-robot collaboration are derived from an evaluation of the components' remanufacturability parameters. To target the right components, the PROMETHEE II method is employed to select components based on cleanability, repairability, and economy. The disassembly process is then represented using an AND/OR representation, and a mathematical model of the process is defined. New optimization parameters for human-robot collaboration are introduced, and a genetic algorithm is modified to find a near-optimal solution based on these parameters. To validate the task classification and allocation, a 6-DOF TECHMAN robot arm is used to test a peg-out-of-hole operation as a common disassembly task; the experiments confirm the task classification and allocation method. Finally, an automotive component was selected as a case study to validate the efficiency of the proposed method. The results, compared with those of the Particle Swarm Optimization algorithm, demonstrate the efficiency and reliability of the method, which produces higher-quality solutions for the human-robot collaborative disassembly process.
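To make the component-selection step concrete, the following is a minimal sketch of PROMETHEE II ranking (not the authors' implementation); the component names, criterion scores, and weights are hypothetical example values, and the "usual" preference function is assumed:

```python
# Minimal PROMETHEE II sketch: rank alternatives by net outranking flow.
# Components, scores, and weights below are made-up illustration values.

def promethee_ii(scores, weights):
    """scores: {alternative: [criterion values]}, higher is better.
    Returns alternatives ranked by descending net outranking flow."""
    names = list(scores)
    n = len(names)

    def pref(a, b):
        # "Usual" preference function: full preference on any positive difference.
        return sum(w * (1.0 if x > y else 0.0)
                   for w, x, y in zip(weights, scores[a], scores[b]))

    net = {}
    for a in names:
        phi_plus = sum(pref(a, b) for b in names if b != a) / (n - 1)
        phi_minus = sum(pref(b, a) for b in names if b != a) / (n - 1)
        net[a] = phi_plus - phi_minus          # net flow: positive = preferred
    return sorted(names, key=net.get, reverse=True)

# Hypothetical components scored on cleanability, repairability, economy:
components = {"housing": [0.8, 0.6, 0.7],
              "gearbox": [0.4, 0.9, 0.5],
              "bracket": [0.6, 0.3, 0.9]}
ranking = promethee_ii(components, weights=[0.3, 0.4, 0.3])
```

The component at the head of the ranking would be the first candidate for targeted (remanufacturing-driven) disassembly.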

4.
A mandibular motion simulation system based on a mixed reality interface
To address the shortcomings of the interfaces and interaction techniques in traditional medical visualization and simulation systems, a mandibular motion simulation system with a mixed reality interface was designed and implemented, using visual tracking technology and physical models. Through an analysis of physical interface elements, virtual interface elements, and interaction metaphors, a general taxonomy of interaction techniques for mixed reality environments is established. Semantic mappings between physical and virtual interface elements translate the user's operations from the physical world into the virtual space, providing a more intuitive and natural way to interact.

5.
Interactive robots doing collaborative work in a hybrid work cell need an adaptive trajectory planning strategy. Indeed, such systems must be able to generate their own trajectories without colliding with dynamic obstacles, such as humans and assembly components moving inside the robot workspace. The aim of this paper is to improve collision-free motion planning in a dynamic environment in order to ensure human safety during collaborative tasks, such as sharing production activities between human and robot. Our system proposes a trajectory generation method for an industrial manipulator in a shared workspace. A neural network trained with supervised learning creates the waypoints required for dynamic obstacle avoidance. These points are linked by a quintic polynomial function for smooth motion, which is optimized using least squares to compute an optimal trajectory. Moreover, the evaluation of human motion forms is taken into consideration in the proposed strategy. According to the results, the proposed approach is an effective solution for trajectory generation in a dynamic environment such as a hybrid workspace.
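As a hedged illustration of the smoothing step only (not the paper's code, and omitting the neural network and the least-squares optimization over multiple waypoints), a quintic polynomial segment between two waypoints with prescribed boundary conditions can be sketched as follows; the boundary values used at the end are example numbers:

```python
# Quintic polynomial segment q(t) = sum c[i] * t**i between two waypoints,
# matching position, velocity, and acceleration at both ends (a common
# choice for smooth robot motion). Illustrative sketch, not the paper's code.

def solve(A, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(b)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            M[r] = [x - f * y for x, y in zip(M[r], M[col])]
    x = [0.0] * n
    for i in reversed(range(n)):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def quintic_coeffs(q0, qf, T, v0=0.0, vf=0.0, a0=0.0, af=0.0):
    """Coefficients of the quintic satisfying the six boundary conditions."""
    A = [[1, 0, 0, 0, 0, 0],                         # q(0)
         [0, 1, 0, 0, 0, 0],                         # q'(0)
         [0, 0, 2, 0, 0, 0],                         # q''(0)
         [1, T, T**2, T**3, T**4, T**5],             # q(T)
         [0, 1, 2*T, 3*T**2, 4*T**3, 5*T**4],        # q'(T)
         [0, 0, 2, 6*T, 12*T**2, 20*T**3]]           # q''(T)
    return solve(A, [q0, v0, a0, qf, vf, af])

# Example segment: move a joint from 0 to 1 over 2 s, at rest at both ends.
c = quintic_coeffs(q0=0.0, qf=1.0, T=2.0)
q = lambda t: sum(ci * t**i for i, ci in enumerate(c))
```

With zero boundary velocities and accelerations this reproduces the familiar 10-15-6 minimum-jerk-style profile, so the motion starts and stops without jerky transients.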

6.
For users with basic cognitive abilities, such as elderly people with limited mobility, people with limb disabilities, and patients with motor or speech impairments, an integrated human-machine navigation system for an elderly- and disabled-assistance service robot was built. The user and the robot interact and can switch freely between two motion modes: random walking and autonomous navigation. Depending on the environment and working conditions, the robot triggers human-machine condition-response generation rules in real time to produce the corresponding walking behaviour. During operation, the human-machine interface synchronously presents an intelligent space that blends the virtual and the real with real-time interaction, achieving integrated human-machine perception, decision making, and execution. Indoor navigation experiments with a mobile service robot verified the feasibility of the integrated navigation system.

7.
The lack of compelling content has relegated many promising entertainment technologies to laboratory curiosities. Although mixed-reality techniques show great potential, the entertainment business is not about technology. To penetrate these huge markets, MR technology must become transparent for the content to have full effect. To achieve this goal, we have devised a framework that lets us integrate concepts from disparate areas such as theme parks, theater, and film into a comprehensive research methodology. We believe that our framework, which has already helped us create content for MR entertainment systems, can provide these benefits to other developers as well.

8.
In mixed reality (MR) design review, the aesthetics of a virtual prototype is assessed by integrating a virtual model into a real-world environment and inspecting the interaction between the model and the environment (lighting, shadows and reflections) from different points of view. The visualization of the virtual model has to be as realistic as possible to provide a solid basis for this assessment, and interactive rendering speed is mandatory to allow the designer to examine the scene from arbitrary positions. In this article we present a real-time rendering engine specifically tailored to the needs of MR visualization. The renderer utilizes pre-computed radiance transfer to calculate dynamic soft shadows, high dynamic range images and image-based lighting to capture incident real-world lighting, approximate bidirectional texture functions to render materials with self-shadowing, and frame post-processing filters (a bloom filter and an adaptive tone mapping operator). The proposed combination of rendering techniques provides a trade-off between rendering quality and required computing resources that enables high-quality rendering in mobile MR scenarios. The resulting image fidelity is superior to radiosity-based techniques because glossy materials and dynamic environment lighting with soft shadows are supported. Ray tracing-based techniques provide higher-quality images than the proposed system, but they require a cluster of computers to achieve interactive frame rates, which prevents their use in mobile MR (especially outdoor) scenarios. The renderer was developed in the European research project IMPROVE (FP6-IST-2-004785) and is currently being extended in the MAXIMUS project (FP7-ICT-1-217039), where hybrid rendering techniques that fuse PRT and ray tracing are being developed.
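For readers unfamiliar with adaptive tone mapping, a minimal sketch of a Reinhard-style global operator (assumed here as an illustration; the abstract does not specify which operator the engine uses, and the key value and HDR radiances below are made up) looks like this:

```python
# Reinhard-style global tone mapping sketch: adapt to the scene's
# log-average luminance, then compress into a displayable [0, 1) range.
# Illustrative only; the actual operator in the described renderer may differ.
import math

def reinhard_tonemap(luminances, key=0.18, eps=1e-6):
    """Map HDR luminance values to [0, 1) using log-average adaptation."""
    # Log-average luminance approximates the scene's overall brightness.
    log_avg = math.exp(sum(math.log(eps + L) for L in luminances)
                       / len(luminances))
    scaled = [key * L / log_avg for L in luminances]   # exposure adaptation
    return [L / (1.0 + L) for L in scaled]             # range compression

hdr = [0.01, 0.5, 2.0, 150.0, 4000.0]   # example radiance values (cd/m^2)
ldr = reinhard_tonemap(hdr)
```

The compression curve L/(1+L) preserves detail in dark regions while rolling off highlights smoothly, which is why such operators suit HDR image-based lighting pipelines.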

9.
Gathering a group of remotely located engineers to design a vehicle can be difficult, especially if they live in different countries. To overcome this obstacle, we, a team at the National Center for Supercomputing Applications (NCSA) in the US in partnership with Germany's National Research Center for Information Technology (GMD), developed a collaborative virtual prototyping system for Caterpillar. The Virtual Prototyping System (VPS) lets engineers in Belgium and the US work together on vehicle designs using distributed virtual reality. The system supports collaborative design review and interactive redesign. Integrated real-time video transmissions let engineers see other participants in a shared virtual environment at each remote site's viewpoint position and orientation. Any number of remotely located sites may join the shared VE, communicating via multicast. The system has been tested with three sites at NCSA.

10.
Photo-realistic rendering of virtual objects into real scenes is one of the most important research problems in computer graphics. Methods for capture and rendering of mixed reality scenes are driven by a large number of applications, ranging from augmented reality to visual effects and product visualization. Recent developments in computer graphics, computer vision, and imaging technology have enabled a wide range of new mixed reality techniques, including methods for advanced image-based lighting, capturing spatially varying lighting conditions, and algorithms for seamlessly rendering virtual objects directly into photographs without explicit measurements of the scene lighting. This report gives an overview of the state of the art in this field, and presents a categorization and comparison of current methods. Our in-depth survey provides a tool for understanding the advantages and disadvantages of each method, and gives an overview of which technique is best suited to a specific problem.

11.
An augmented reality interface to contextual information
In this paper, we report on a prototype augmented reality (AR) platform for accessing abstract information in real-world pervasive computing environments. Using this platform, objects, people, and the environment serve as contextual channels to more information. The user's interest with respect to the environment is inferred from eye movement patterns, speech, and other implicit feedback signals, and these data are used for information filtering. The results of proactive context-sensitive information retrieval are augmented onto the view of a handheld or head-mounted display or uttered as synthetic speech. The augmented information becomes part of the user's context, and if the user shows interest in the AR content, the system detects this and provides progressively more information. In this paper, we describe the first use of the platform to develop a pilot application, Virtual Laboratory Guide, and early evaluation results of this application.

12.
Most augmented reality (AR) applications are primarily concerned with letting a user browse a 3D virtual world registered with the real world. More advanced AR interfaces let the user interact with the mixed environment, but the virtual part is typically rather finite and deterministic. In contrast, autonomous behavior is often desirable in ubiquitous computing (Ubicomp), which requires the computers embedded into the environment to adapt to context and situation without explicit user intervention. We present an AR framework that is enhanced by typical Ubicomp features by dynamically and proactively exploiting previously unknown applications and hardware devices, and adapting the appearance of the user interface to persistently stored and accumulated user preferences. Our framework explores proactive computing, multi-user interface adaptation, and user interface migration. We employ mobile and autonomous agents embodied by real and virtual objects as an interface and interaction metaphor, where agent bodies are able to opportunistically migrate between multiple AR applications and computing platforms to best match the needs of the current application context. We present two pilot applications to illustrate design concepts. Copyright © 2007 John Wiley & Sons, Ltd.

13.
A multimodal virtual reality interface for 3D interaction with VTK
The object-oriented Visualization Toolkit (VTK) is widely used for scientific visualization. VTK is a visualization library that provides a large number of functions for presenting three-dimensional data. Interaction with the visualized data is controlled with two-dimensional input devices, such as a mouse and keyboard; support for true three-dimensional and multimodal input is non-existent. This paper describes VR-VTK: a multimodal interface to VTK in a virtual environment. Six-degree-of-freedom input devices are used for spatial 3D interaction; they control the 3D widgets that are used to interact with the visualized data. Head tracking is used for camera control, pedals are used for clutching, and speech input is used for application commands and system control. To address several problems specific to spatial 3D interaction, a number of additional features, such as more complex interaction methods and enhanced depth perception, are discussed. Furthermore, the need for multimodal input to support interaction with the visualization is shown. Two existing VTK applications are ported using VR-VTK to run in a desktop virtual reality system, and informal user experiences are presented.

Arjan J. F. Kok is an assistant professor in the Department of Computer Science at the Open University of the Netherlands. He studied Computer Science at the Delft University of Technology, The Netherlands, and received his Ph.D. from the same university. He worked as a scientist for TNO (Netherlands Organization for Applied Scientific Research) and as an assistant professor at the Eindhoven University of Technology before he joined the Open University. His research interests are visualization, virtual reality, and computer graphics.

Robert van Liere studied Computer Science at the Delft University of Technology, the Netherlands. He received his Ph.D. with the thesis “Studies in Interactive Scientific Visualization” at the University of Amsterdam. Since 1985, he has worked at CWI, the Center for Mathematics and Computer Science in Amsterdam, where he is the head of CWI's visualization research group. Since 2004, he has held a part-time position as full professor at the Eindhoven University of Technology. His research interests are in interactive data visualization and virtual reality. He is a member of IEEE.

14.
In this paper, a novel AR interface is proposed that provides generic solutions to the tasks involved in simultaneously augmenting different types of virtual information and processing tracking data for natural interaction. Participants in the system can experience a real-time mixture of 3D objects, static video, images, textual information and 3D sound with the real environment. The user-friendly AR interface achieves maximum interaction using simple but effective forms of collaboration based on combinations of human–computer interaction techniques. To prove the feasibility of the interface, indoor AR techniques are employed to construct innovative applications, with demonstration examples ranging from heritage to learning systems. Finally, an initial evaluation of the AR interface, including some initial results, is presented.

15.
Universal Access in the Information Society - Global collaboration is the major trend in the architecture, engineering, and construction industry; training in global engineering collaboration thus...

16.
In this paper, we present a simple and robust mixed reality (MR) framework that allows for real-time interaction with virtual humans in mixed reality environments under consistent illumination. We look at three crucial parts of this system: interaction, animation and global illumination of virtual humans, for an integrated and enhanced presence. The interaction system comprises a dialogue module, which is interfaced with a speech recognition and synthesis system. Next to speech output, the dialogue system generates face and body motions, which are in turn managed by the virtual human animation layer. Our fast animation engine can handle various types of motions, such as normal key-frame animations, or motions that are generated on the fly by adapting previously recorded clips; real-time idle motions are an example of the latter category. All these different motions are generated and blended online, resulting in flexible and realistic animation. Our robust rendering method operates in accordance with the animation layer and is based on a precomputed radiance transfer (PRT) illumination model extended for virtual humans, resulting in a realistic rendition of such interactive virtual characters in mixed reality environments. Finally, we present a scenario that illustrates the interplay and application of our methods, combined in a single framework for presence and interaction in MR.

17.
This paper introduces a mixed reality workspace that allows users to combine physical and computer-generated artifacts, and to control and simulate them within one fused world. All interactions are captured, monitored, modeled and represented with pseudo-real world physics. The objective of the presented research is to create a novel system in which the virtual and physical world would have a symbiotic relationship. In this type of system, virtual objects can impose forces on the physical world and physical world objects can impose forces on the virtual world. Virtual Bounds is an exploratory study allowing a physical probe to navigate a virtual world while observing constraints, forces, and interactions from both worlds. This scenario provides the user with the ability to create a virtual environment and to learn to operate real-life probes through its virtual terrain.

18.
This paper describes the application of a mixed-evaluation method, published elsewhere, to three different learning scenarios. The method defines how to combine social network analysis with qualitative and quantitative analysis in order to study participatory aspects of learning in CSCL contexts. The three case studies include a course-long, blended learning experience evaluated as the course develops; a course-long, distance learning experience evaluated at the end of the course; and a synchronous experience of a few hours duration. These scenarios show that the analysis techniques and data collection and processing tools are flexible enough to be applied in different conditions. In particular, SAMSA, a tool that processes interaction data to allow social network analysis, is useful with different types of interactions (indirect asynchronous or direct synchronous interactions) and different data representations. Furthermore, the predefined types of social networks and indexes selected are shown to be appropriate for measuring structural aspects of interaction in these CSCL scenarios. These elements are usable and their results comprehensible by education practitioners. Finally, the experiments show that the mixed-evaluation method and its computational tools allow researchers to efficiently achieve a deeper and more reliable evaluation through complementarity and the triangulation of different data sources. The three experiments described show the particular benefits of each of the data sources and analysis techniques.
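As a hedged illustration of the kind of structural indexes such tools compute (this is not SAMSA, and the abstract does not name the exact indexes used), network density and normalized out-degree centrality over a directed interaction graph can be sketched as follows; the participants and reply edges are invented example data:

```python
# Two basic social network analysis indexes over a directed interaction
# graph (e.g. "who replied to whom" in a CSCL forum). Illustrative sketch
# with made-up data, not the SAMSA tool itself.

def density(nodes, edges):
    """Directed network density: observed distinct arcs / possible arcs."""
    n = len(nodes)
    return len(set(edges)) / (n * (n - 1))

def degree_centrality(nodes, edges):
    """Normalized out-degree centrality: how actively each node initiates."""
    n = len(nodes)
    out = {v: 0 for v in nodes}
    for src, dst in set(edges):
        out[src] += 1
    return {v: d / (n - 1) for v, d in out.items()}

# Hypothetical forum replies: (author of reply, author replied to)
people = ["ana", "ben", "carla", "dmitri"]
replies = [("ana", "ben"), ("ana", "carla"), ("ben", "ana"),
           ("carla", "ana"), ("carla", "dmitri")]
dens = density(people, replies)
cent = degree_centrality(people, replies)
```

A low density or a centrality distribution dominated by one participant would flag, respectively, weak overall participation or an interaction structure centred on a single member.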

19.
This paper describes HyperMem, a system to store and replay user experiences in mixed environments. The experience is stored as a set of hypermedia nodes and links, with the information that was displayed along with the video of the real world that was navigated. It uses a generic hypermedia model, implemented as software components, to handle mixed reality environments. This model includes components for storing and replaying experiences and integrating them in the overall set of hypermedia graphs that can be accessed by a given user. The paper presents the goals of the system, the underlying hypermedia model, the application scenarios, and the architecture and tools for replaying and repurposing stored information.

20.
In this paper we present our work in the design of ubiquitous social experiences, aiming to foster group participation and spontaneous playful behaviours in a city environment. We outline our approach of design for emergence: to provide just enough of a game context and challenge for people to be creative, to extend and enrich the experience of play through their interaction in the real world. CitiTag is our mixed reality testbed, a wireless location-based multiplayer game based on the concept of playground ‘tag’. We describe the design and implementation of CitiTag and discuss results from two user studies.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号