Similar Documents
 20 similar documents found (search time: 31 ms)
1.
The past few years have witnessed a dramatic growth in the number and variety of graphics-intensive mobile applications that allow users to interact with and navigate through large scenes such as historical sites, museums and virtual cities. These applications support many clients and place heavy demands on network and computational resources. One key issue in the design of cost-efficient mobile walkthrough applications is the data transmission between servers and mobile client devices. In this paper, we propose an effective progressive mesh transmission framework that stores and divides scene objects into different resolutions. In this approach, each mobile device progressively receives and processes only the object details matching its display resolution, which improves the overall system's response time and the user's perception. A fine-grained cache mechanism keeps the most frequently requested object details in device memory and consequently reduces network traffic. Experiments in simulated and real-world environments illustrate the effectiveness of the proposed framework under various virtual-scene and mobile-device configurations. Experimental results show that the proposed framework can improve walkthrough performance on mobile devices with relatively small overhead.
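The abstract does not describe the cache policy in detail; as a hedged illustration only (class name, keys and capacity below are assumptions, not the paper's implementation), a fine-grained cache for mesh detail levels could be an LRU map keyed by (object, level-of-detail):

```python
from collections import OrderedDict

class MeshDetailCache:
    """Keep the most recently requested mesh detail levels in memory (LRU)."""

    def __init__(self, capacity):
        self.capacity = capacity
        self._entries = OrderedDict()  # (object_id, lod) -> detail payload

    def get(self, object_id, lod):
        key = (object_id, lod)
        if key not in self._entries:
            return None  # cache miss: would trigger a network fetch
        self._entries.move_to_end(key)  # mark as most recently used
        return self._entries[key]

    def put(self, object_id, lod, payload):
        key = (object_id, lod)
        self._entries[key] = payload
        self._entries.move_to_end(key)
        if len(self._entries) > self.capacity:
            self._entries.popitem(last=False)  # evict least recently used
```

A device would call `get` before requesting a detail level over the network, and `put` after each download, so frequently revisited objects stay resident.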

2.
Kwak  Suhwan  Choe  Jongin  Seo  Sanghyun 《Multimedia Tools and Applications》2020,79(23-24):16141-16154

Rapid developments in augmented reality (AR) and related technologies have led to increasing interest in immersive content. AR environments are created by combining virtual 3D models with a real-world video background. It is important to merge these two worlds seamlessly if users are to enjoy AR applications, but, all too often, the illumination and shading of virtual objects does not reflect the real-world lighting conditions or does not match that of nearby real objects. In addition, visual artifacts produced when blending real and virtual objects further limit realism. In this paper, we propose a harmonic rendering technique that minimizes the visual discrepancy between the real and virtual environments to maintain visual coherence in outdoor AR. To do this, we introduce a method of estimating and approximating the Sun's position and the sunlight direction in order to estimate the real sunlight intensity: as the most significant illumination source in outdoor AR, it provides a more realistic lighting environment for such content and reduces the mismatch between real and virtual objects.

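The abstract does not give the authors' exact estimation method; as an illustrative sketch only, the Sun's elevation can be approximated from date, time and latitude with the standard declination/hour-angle formulas (the function name and Cooper's declination approximation are assumptions, not the paper's model):

```python
import math

def sun_elevation_deg(day_of_year, solar_hour, latitude_deg):
    """Approximate solar elevation angle (degrees) from day of year,
    local solar time (hours), and latitude."""
    # Solar declination (Cooper's approximation), in radians
    decl = math.radians(23.45) * math.sin(2 * math.pi * (284 + day_of_year) / 365)
    # Hour angle: 15 degrees per hour away from solar noon
    hour_angle = math.radians(15 * (solar_hour - 12))
    lat = math.radians(latitude_deg)
    sin_elev = (math.sin(lat) * math.sin(decl)
                + math.cos(lat) * math.cos(decl) * math.cos(hour_angle))
    return math.degrees(math.asin(sin_elev))
```

Given the elevation (and an analogous azimuth), a directional light can be placed in the virtual scene to match the real Sun.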

3.
黄俊  景红 《计算机系统应用》2015,24(10):259-263
The recently released motion-sensing device Leap Motion offers users a brand-new experience: by tracking and detecting dynamic hand gestures, it enables contact-free human-computer interaction such as motion-sensing games, virtual instrument performance, and mid-air drawing. This article first introduces the technical characteristics of Leap Motion, compares it with similar devices, and surveys its applications and development prospects. It then analyses the principles and technical foundations of Leap Motion and proposes a gesture-control technique based on the device. Finally, the implementation of this technique is described in detail through a concrete example: using hand gestures to control the movement of objects in a Unity 3D virtual scene.

4.
A new vision-based framework and system for human-computer interaction in an Augmented Reality environment is presented in this article. The system allows users to interact with computer-generated virtual objects directly using their hands. With an efficient color segmentation algorithm, the system adapts to different lighting conditions and backgrounds, and it is also suitable for real-time applications. The dominant features on the palm are detected and tracked to estimate the camera pose. After the camera pose relative to the user's hand has been reconstructed, 3D virtual objects can be augmented naturally onto the palm for the user to inspect and manipulate. With an off-the-shelf web camera and computer, natural bare-hand interactions with 2D and 3D virtual objects can be achieved at low cost.

5.
Notwithstanding the recent diffusion of stereoscopic 3D technologies for the development of powerful human-computer interaction systems based on augmented reality environments, with conventional approaches an observer moving freely in front of a 3D display can misperceive the depth and shape of virtual objects. Such distortions can cause eye fatigue and stress in entertainment applications, and they can have serious consequences in scientific and medical fields, where a veridical perception of the scene layout is required. We propose a novel technique for building augmented reality systems capable of correctly rendering 3D virtual objects to an observer who changes his/her position in the real world and acts in the virtual scenario. By tracking the positions of the observer's eyes, the proposed technique generates the correct virtual viewpoints through asymmetric frustums, thus obtaining the correct left and right projections on the screen. The natural perception of the scene layout is assessed through three experimental sessions with several observers.
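As a rough sketch of the asymmetric-frustum idea (not the authors' code; all names and the screen-centred coordinate convention are assumptions), the frustum extents at the near plane follow by similar triangles from the tracked eye position relative to the screen:

```python
def off_axis_frustum(eye, half_w, half_h, near, far):
    """Off-axis frustum extents, in glFrustum order (l, r, b, t, n, f),
    for an eye at (x, y, d) relative to the screen centre, where d is
    the eye-to-screen distance and half_w/half_h are the screen's
    half-extents."""
    ex, ey, d = eye
    s = near / d  # similar triangles: project screen edges onto the near plane
    return ((-half_w - ex) * s, (half_w - ex) * s,
            (-half_h - ey) * s, (half_h - ey) * s,
            near, far)
```

For a centred eye this degenerates to the usual symmetric frustum; as the eye moves sideways, the left/right extents become asymmetric, yielding the perspective-correct projection for that viewpoint.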

6.
One of our purposes is to develop virtual creatures that can acquire behaviors such as building structural objects in a 3D physical simulation. In this article, we show the influence of behavior on structural objects built by virtual creatures. Many creatures can change their environment for the better by building structural objects, for example, spiders' nests. In the field of artificial life, there are many studies of virtual creatures that change their bodies and behavior to suit their environments; in contrast, there are few studies of virtual creatures that build structural objects. Like natural creatures, virtual creatures need physical interaction between their bodies and their environment. Therefore, our purpose is to develop a framework for the autonomous acquisition of behaviors that build structural objects in a 3D physical simulation. To do this, we first studied the evolutionary acquisition of building behavior, e.g., constructing a nest for predation, through the simple behavior of throwing blocks. As a result, we show that virtual creatures can potentially acquire building behavior through evolution.

7.
In the ubiquitous computing environment, people will interact with everyday objects (or computers embedded in them) in ways different from the usual and familiar desktop user interface. One such typical situation is interacting with applications through large displays such as televisions, mirror displays, and public kiosks. With these applications, the usual keyboard and mouse input is not normally viable (for practical reasons). In this setting, the mobile phone has emerged as an excellent device for novel interaction. This article introduces user interaction techniques using a camera-equipped hand-held device, such as a mobile phone or a PDA, with large shared displays. In particular, we consider two specific but typical situations: (1) sharing the display from a distance and (2) interacting with a touch-screen display at close range. Using two basic computer vision techniques, motion flow and marker recognition, we show how a camera-equipped hand-held device can effectively replace a mouse and be used to share, select, and manipulate 2D and 3D objects, and to navigate within the environment presented through the large display.

8.
Many interface devices for virtual reality provide full immersion inside the virtual environment. This is appropriate for numerous applications that emphasize navigating through a virtual space. However, a large class of problems exists for which navigation is not the critical issue; rather, these applications demand fine-granularity visualization of, and interaction with, virtual objects and scenes. This applies to a host of other applications typically performed on a desktop, table or workbench. Responsive Workbench technology offers a new way to develop virtual environments for this rather sizable class of applications. The Responsive Workbench operates by projecting a computer-generated stereoscopic image off a mirror and through a table surface. Wearing stereoscopic shuttered glasses, users observe a 3D image displayed above the tabletop. By tracking the group leader's head and hand movements, the Responsive Workbench permits changing the view angle and interacting with the 3D scene. Other group members observe the scene as the group leader manipulates it, facilitating communication among observers. Typical methods for interacting with virtual objects on the workbench include speech recognition, gesture recognition and a simulated laser pointer (stylus). This article features Responsive Workbench applications from four institutions that have pioneered this technology. The four applications are: visualization, situational awareness, collaborative production modeling and planning, and a virtual wind tunnel.

9.
Non-uniform rational B-splines (NURBS) have been widely accepted as a standard tool for geometry representation and design. Their rich geometric properties allow them to represent both analytic shapes and free-form curves and surfaces precisely. Moreover, a set of tools is available for shape modification or, more implicitly, object deformation. Existing NURBS rendering methods include the de Boor algorithm, the Oslo algorithm, Shantz's adaptive forward differencing algorithm and Silbermann's high-speed implementation of NURBS. However, these methods only consider speeding up the rendering of individual frames. Recently, Kumar et al. proposed an incremental method for rendering NURBS surfaces, but it is still limited to static surfaces. Real-time applications such as virtual reality require interactive display; if a virtual environment contains many deforming objects, these methods cannot provide a good solution. In this paper, we propose an efficient method for interactive rendering of deformable objects that maintains a polygon model of each deforming NURBS surface and adaptively refines the resolution of the polygon model. We also look at how this method may be applied to multi-resolution modelling.

10.
Advanced interaction techniques in virtual environments   Cited by: 4 (self-citations: 0, by others: 4)
Fundamental to much virtual reality work, in addition to high-level 3D graphical and multimedia scenes, is research on advanced methods of interaction. The "visitor" to such virtual worlds must be able to act and behave intuitively, as in everyday situations, and must receive natural, expected behaviour as feedback from the objects in the environment, so that he/she has the feeling of interacting directly with the application. In this paper we present several techniques to enrich the naturalness and enhance the user's involvement in the virtual environment. We show how the user is enabled to grab objects without using any specific, elaborate hand gesture, which is more intuitive and closer to the way humans are used to acting. We also introduce a technique that makes it possible for the user to follow the contours of objects without any force-feedback interaction device. This technique allows the user to surround or "walk" with the virtual hand over the object's surface and look for the best position to grab it.

11.
Toward spontaneous interaction with the Perceptive Workbench   Cited by: 1 (self-citations: 0, by others: 1)
Until now, we have interacted with computers mostly by using wire-based devices. Typically, the wires limit the distance of movement and inhibit freedom of orientation. In addition, most interactions are indirect: the user moves a device as an analog for the action performed in the display space. We envision an untethered interface that accepts gestures directly and can accept any objects we choose as interactors. We discuss methods for producing more seamless interaction between the physical and virtual environments through the Perceptive Workbench. We applied the system to an augmented reality game and a terrain navigation system. The Perceptive Workbench can reconstruct 3D virtual representations of previously unseen real-world objects placed on its surface. In addition, it identifies and tracks such objects as they are manipulated on the desk's surface and allows the user to interact with the augmented environment through 2D and 3D gestures.

12.
Research has been conducted on how to aid blind people's perception and cognition of scientific data and, specifically, on how to strengthen their background in mathematics as a means of accomplishing this goal. In search of alternate modes to vision, researchers and practitioners have studied the opportunities of haptics alone and in combination with other modes, such as audio. What is already known, and has motivated research in this area, is that touch and vision may form a common brain representation shared between the visual and haptic modalities, and that through haptics learning is active rather than passive. In spite of extensive research on haptics in the areas of psychology and neuropsychology, recent advances and the rare experiences of using haptic technology have not led to a transfer from basic knowledge about haptics to learning applications and practical guidelines on how to develop such applications. Thus motivated, this study investigates different haptic effects, such as free space, magnetic effects and the bounded box, when blind people are given the task of recognising and manipulating classes of 3D objects with which they have varying familiarity. In parallel, this study investigates the applicability of Sjöström's guidelines on haptic application development and uses his problem classification to capture knowledge from the experiments. The results of this study show that users can easily recognise and manipulate familiar objects, albeit with some assistance. There is an indication that users completed tasks faster and needed less assistance with magnetic effects; however, they were not as satisfied with this mode. While the results show that haptics has the potential to allow students to conceptualise 3D objects, much more work is needed to exploit this technology to the fullest. Objects with higher complexity are difficult for students, and, in their opinion, the virtual objects (as presented) leave much room for improvement. Sjöström's error taxonomy proved useful, and four of the five sub-guidelines tested were confirmed to be useful in this study.

13.
This article addresses the problem of creating interactive mixed reality applications where virtual objects interact with images of real-world scenarios. This is relevant for creating games and architectural or space-planning applications that interact with visual elements in the images, such as walls, floors and empty spaces. These scenarios are intended to be captured by the users with regular cameras or using previously taken photographs. Introducing virtual objects into photographs presents several challenges, such as pose estimation and the creation of a visually correct interaction between virtual objects and the boundaries of the scene. The two main research questions addressed in this article are the feasibility of creating interactive augmented reality (AR) applications where virtual objects interact with a real-world scenario using high-level features detected in the image, and whether untrained users are capable of, and motivated enough for, performing the AR initialization steps. The proposed system detects the scene automatically from an image, with additional features obtained through basic annotations from the user; this operation is deliberately simple to accommodate the needs of non-expert users. The system analyzes one or more photos captured by the user and detects high-level features such as vanishing points, the floor, and the scene orientation. Using these features it is possible to create mixed and augmented reality applications where the user interactively introduces virtual objects that blend with the picture in real time and respond to the physical environment. To validate the solution, several system tests are described and compared using available external image datasets.

14.
This paper proposes an augmented reality content authoring system that enables ordinary users without programming skills to easily apply interactive features to virtual objects on a marker via gestures. The purpose of this system is to simplify the use of augmented reality (AR) technology for ordinary users, especially parents and preschool children who are unfamiliar with it. The system provides an immersive AR environment with a head-mounted display and recognizes users' gestures via an RGB-D camera. Users can freely create the AR content they will use, without any special programming ability, simply by connecting virtual objects stored in a database to the system. Once a marker is recognized via the system's RGB-D camera worn by the user, he/she can apply various interactive features to the marker-based AR content using simple gestures. These interactive features allow virtual objects to be enlarged, shrunk, rotated, and moved with hand gestures. In addition to gesture interaction, the proposed system also allows tangible interaction using the markers themselves. The AR content that the user edits is stored in a database and is retrieved whenever the markers are recognized. The results of comparative experiments indicate that the proposed system is easier to use and yields a higher level of interaction satisfaction than AR environments such as fixed-monitor setups and touch-based interaction on mobile screens.

15.
To address the difficulty of collision interaction between virtual and real objects in traditional augmented reality systems, this paper proposes a method that segments the scene in a depth image and builds proxy geometry from the segmentation result to enable virtual-real collision interaction. A depth-acquisition device such as Kinect captures the colour and depth images of the current real scene; the dominant planar regions are identified through normal-based clustering of the depth image and plane fitting; the remaining clustered point-cloud regions are then merged to obtain the other main object regions in the scene. A virtual plane is constructed as the proxy geometry for each identified dominant plane, and a bounding box serves as the proxy geometry for each segmented object. By overlaying these proxy geometries on the real objects and assigning them physical properties, collision interaction between virtual and real objects can be simulated. Experimental results show that the method can effectively segment simple scenes and thereby achieve virtual-real interaction.

16.
Physics-based fluid interaction plays an important role in computer animation, with wide applications in virtual reality, computer games, digital entertainment, etc. For example, in virtual reality education and games, we often need fluid interactions such as acting as an alchemist who creates a potion by stirring fluid in a crucible. Traditional input devices such as a mouse and keyboard can essentially only input 2D information, without feedback. In recent years, continuously developed haptic devices not only achieve six degrees-of-freedom input, but can also compute the forces in virtual scenes and feed them back to the user for a better virtual experience. How to use haptic devices in different kinds of virtual fluid scenarios to provide a better experience is an important issue in the field of virtual reality. On the other hand, research on multiple-fluid interaction, especially based on the smoothed particle hydrodynamics (SPH) method, is very scarce. Therefore, we study the key techniques of haptic interaction with SPH multiple-fluid simulation to fill this gap in the computer graphics community. Unlike single-phase flow, interaction with multiple-fluid flow must realize the properties of the different phases; once multiple-fluid simulation is added, keeping the haptic interaction real-time is also challenging. Our research is based on the mixture model: we preserve the authenticity of the multiple-fluid mixing effect while changing the drift-velocity solver to improve efficiency. We employ a unified particle model to achieve rigid body-liquid coupling, and use an FIR filter to smooth the force fed back to the haptic device. Our novel multiple-fluid haptic simulation can provide an interactive experience for mixing liquids in virtual reality.
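The abstract mentions smoothing the feedback force with an FIR filter. A minimal sketch of such a filter (the function name and tap weights below are illustrative assumptions, not the authors' parameters):

```python
def fir_smooth(samples, taps):
    """Smooth a sampled force signal with an FIR filter: each output
    value is the dot product of the most recent len(taps) inputs with
    the tap weights (a convolution, with the history zero-initialised)."""
    out = []
    history = [0.0] * len(taps)
    for s in samples:
        history = [s] + history[:-1]  # shift in the newest sample
        out.append(sum(w * x for w, x in zip(taps, history)))
    return out
```

With equal taps this is a moving average; in a haptic loop it would run per frame on each force component to suppress jitter before the force is sent to the device.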

17.
Research on modelling object interaction features in immersive virtual assembly   Cited by: 1 (self-citations: 0, by others: 1)
Immersive virtual assembly is a "human-in-the-loop" virtual reality system, applicable to fields such as the design of large equipment and the training of assembly personnel. As one of its key technologies, enabling a "real" human to operate "virtual" objects remains a challenging task. To address this problem, a method for modelling object interaction features based on interaction zones and state machines is proposed. Through interaction zones, a virtual object can understand the user's operating intention, undergo the corresponding state transitions, and thereby correctly monitor the user's interactive behaviour. The method supports complex assembly operations, including the use of tools, as well as management and control of the assembly process. Its effectiveness has been verified through application in a real project.

18.
Before actual interaction occurs, the components of an object near a device are usually localized by measuring the proximity of the object's features to the device. To do this efficiently, hierarchical decompositions are used, so that the features of the objects are classified into several types of cells, usually rectangular. In this paper we propose a solution based on classifying a set of points situated on the device within a little-known spatial decomposition named the tetra-tree. Using this type of spatial decomposition gives us several quantitative and qualitative properties that allow a more realistic and intuitive visual interaction, as well as the possibility of selecting inaccessible components. These features could be used in virtual sculpting or accessibility tasks. To demonstrate these properties, we compare an interaction system based on tetra-trees with one based on octrees.
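For contrast with the tetra-tree, classifying a point in the more familiar octree decomposition (the baseline the paper compares against) amounts to repeatedly picking one of eight children. This minimal sketch, with illustrative names and a child-index convention of x + 2y + 4z sign bits, shows that classification step:

```python
def octree_path(point, center, half_size, depth):
    """Descend an octree rooted at `center` with half-extent `half_size`,
    returning the list of child indices (0-7) containing `point`,
    one index per level down to `depth` levels."""
    path = []
    cx, cy, cz = center
    x, y, z = point
    h = half_size
    for _ in range(depth):
        idx = ((1 if x >= cx else 0)
               + (2 if y >= cy else 0)
               + (4 if z >= cz else 0))
        path.append(idx)
        h /= 2  # child cells have half the extent...
        cx += h if x >= cx else -h  # ...and centres offset by the new half-extent
        cy += h if y >= cy else -h
        cz += h if z >= cz else -h
    return path
```

Proximity queries then only need to visit cells whose path prefixes lie near the device, instead of testing every feature.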

19.
Vision-based 3D hand tracking is a key and popular component of interaction studies in a broad range of domains such as virtual reality (VR), augmented reality (AR) and natural human-computer interaction (HCI). While this research field has been well studied in recent decades, most approaches have considered the human hand in isolation, not in action or in interaction with the surrounding environment; even the common, strongly collaborative interactions with the other hand have been ignored. However, many of today's computer applications require more and more hand-object interaction. Furthermore, employing contextual information about the object in the hand (e.g. its shape, texture, and pose) can remarkably constrain the tracking problem. The most studied contextual constraints involve interaction with real objects rather than with virtual objects, which remains a very big challenge. The goal of this survey is to develop an up-to-date taxonomy of state-of-the-art vision-based hand pose estimation and tracking methods with a new classification scheme: hand-object interaction constraints. This taxonomy allows us to examine the strengths and weaknesses of the current state of the art and to highlight future trends in the domain.

20.
When giving directions to the location of an object, people typically use other attention-drawing objects as reference, that is, reference objects. With the aim of selecting proper reference objects, useful for locating a target object within a virtual environment (VE), a computational model to identify perceptual saliency is presented. Based on the object features that most strongly stimulate the human visual system, three basic features of a 3D object (i.e., color, size, and shape) are individually evaluated and then combined to obtain a degree of saliency for each 3D object in a virtual scenario. An experiment was conducted to evaluate the extent to which the proposed measure of saliency matches people's subjective perception of saliency; the results showed a good performance of this computational model.
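The combination step described in the abstract could be sketched as a weighted sum of the three per-feature scores. The weights, score ranges and names below are illustrative assumptions, not the paper's calibrated model:

```python
def rank_by_saliency(objects, weights=(0.4, 0.3, 0.3)):
    """objects: list of (name, color_score, size_score, shape_score),
    each score normalised to [0, 1]. Combines the three feature scores
    into one saliency degree per object via a weighted sum, and returns
    the object names sorted from most to least salient."""
    wc, ws, wp = weights
    scored = sorted(objects,
                    key=lambda o: -(wc * o[1] + ws * o[2] + wp * o[3]))
    return [name for name, *_ in scored]
```

The top-ranked objects would then be offered as reference objects when generating directions in the VE.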


Copyright©北京勤云科技发展有限公司  京ICP备09084417号