Similar Documents
20 similar documents found (search time: 31 ms)
1.
Traditional display systems usually present 3D objects on static screens (a monitor, a wall, etc.), and the viewer manipulates the virtual objects indirectly through tools such as a keyboard or mouse. It would be more natural and direct to display the object on a handheld surface and manipulate it with our hands, as if we were holding the real 3D object. In this paper, we propose a prototype system that projects the object onto a handheld foam sphere. The aim is to develop an interactive 3D object manipulation and exhibition tool that does not require the viewer to wear spectacles. In our system, the viewer holds the sphere and moves it freely; meanwhile, we project well-tailored images onto the sphere to follow its motion, giving the viewer the perception that the object sits inside the sphere and is being moved by them. The design goal is a low-cost, real-time, interactive 3D display tool. An off-the-shelf projector-camera pair is first calibrated via a simple but efficient algorithm. Vision-based methods are proposed to detect the sphere and track its subsequent motion. The projection image is generated from the projective geometry among the projector, sphere, camera, and viewer. We describe how to locate the viewing position and warp the projection image, and we present results and a performance evaluation of the system.
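The vision-based sphere detection mentioned in the abstract can be sketched minimally. The thresholding approach and parameter below are our illustrative assumptions, not the paper's algorithm: the lit sphere is treated as the bright disc in the camera frame, so its centre is the centroid of bright pixels and its radius follows from the disc area.

```python
import numpy as np

def detect_sphere(frame_gray, thresh=128):
    """Locate the lit foam sphere in a grayscale camera frame.

    Simplified stand-in for a vision-based detector: the sphere is assumed to
    be the bright disc, so centre = centroid of bright pixels and the radius
    comes from the disc area (A = pi * r^2). The threshold is illustrative.
    Returns (cx, cy, r) in pixels, or None if nothing bright is found.
    """
    ys, xs = np.nonzero(frame_gray >= thresh)
    if xs.size == 0:
        return None
    cx, cy = xs.mean(), ys.mean()
    r = np.sqrt(xs.size / np.pi)
    return float(cx), float(cy), float(r)

# Synthetic 320x240 frame with a bright disc of radius 50 centred at (160, 120).
yy, xx = np.mgrid[0:240, 0:320]
frame = np.where((xx - 160) ** 2 + (yy - 120) ** 2 <= 50 ** 2, 255, 0).astype(np.uint8)
print(detect_sphere(frame))
```

A real system would add temporal tracking and robustness to background clutter, but the centroid-plus-area estimate is enough to drive a first projection-warp experiment.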

2.
3.
We propose a 3D interaction and autostereoscopic display system based on gesture recognition, which can manipulate virtual objects in the scene directly via hand gestures and display objects in stereoscopic 3D. The system consists of a gesture recognition and manipulation part and an autostereoscopic display as the interactive display part. To manipulate the 3D virtual scene, we propose a gesture recognition algorithm that matches spatio‐temporal sequences of feature vectors against predefined gestures. For smooth 3D visualization, we use the programmable graphics pipeline of the graphics processing unit to accelerate data processing. We developed a prototype system for 3D virtual exhibition; it reaches frame rates of 60 fps and operates efficiently with a mean recognition accuracy of 90%.
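The abstract does not say how the spatio-temporal feature sequences are matched; dynamic time warping (DTW) is one common choice for that kind of matching, used here purely as an illustration. The template names and feature vectors are hypothetical.

```python
import numpy as np

def dtw_distance(seq_a, seq_b):
    """Dynamic time warping distance between two sequences of feature vectors
    (one vector per frame). Tolerates gestures performed at different speeds."""
    n, m = len(seq_a), len(seq_b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(np.asarray(seq_a[i - 1], float) - np.asarray(seq_b[j - 1], float))
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return float(D[n, m])

def classify_gesture(observed, templates):
    """Return the name of the predefined template closest to the observed sequence."""
    return min(templates, key=lambda name: dtw_distance(observed, templates[name]))

templates = {
    "swipe_right": [[0.0, 0.0], [0.5, 0.0], [1.0, 0.0]],
    "swipe_up":    [[0.0, 0.0], [0.0, 0.5], [0.0, 1.0]],
}
# An observed swipe-right performed more slowly (more frames, same path).
observed = [[0.0, 0.0], [0.2, 0.0], [0.5, 0.0], [0.8, 0.0], [1.0, 0.0]]
print(classify_gesture(observed, templates))
```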

4.
We present an immaterial display that uses a generalized form of depth-fused 3D (DFD) rendering to create unencumbered 3D visuals. To accomplish this, we demonstrate a DFD display simulator that extends the established depth-fused 3D principle to screens in arbitrary configurations viewed from arbitrary viewpoints. The feasibility of the generalized DFD effect is established with a user study using the simulator. Based on these results, we developed a prototype display that uses one or two immaterial screens to create an unencumbered 3D visual that users can penetrate, and we examine its potential for direct walk-through and reach-through manipulation of the 3D scene. We evaluate the prototype in formative and summative user studies and report the tolerance thresholds discovered for both tracking and projector errors.
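The established two-screen DFD principle that this work generalizes can be stated in a few lines: a point between the screens is drawn on both, with its luminance divided linearly in depth so that the fused percept appears at the intended depth. This sketch shows only the classic special case, not the paper's arbitrary-viewpoint generalization.

```python
def dfd_luminance_split(z, z_front, z_rear):
    """Classic two-screen depth-fused 3D: a point at depth z between the front
    and rear screens is rendered on both, with luminance split linearly so the
    fused percept sits at z. Returns (front_weight, rear_weight); weights sum
    to 1, and z is clamped to the inter-screen range."""
    t = (z - z_front) / (z_rear - z_front)
    t = min(max(t, 0.0), 1.0)
    return 1.0 - t, t

print(dfd_luminance_split(0.5, 0.0, 1.0))  # point midway between the screens -> (0.5, 0.5)
```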

5.
We present a prototype multi-agent system whose goal is to support a 3D application for e-retailing. The prototype demonstrates how agent environments can be among the most promising and flexible approaches to engineering e-retailing applications. We illustrate this point by showing how the agent environment GOLEM supports social interactions and how it combines them with semantic-web technologies to build the e-retailing application. We also describe the features of GOLEM that let a user engage in e-retailing activities, exploring the virtual social environment by searching for and dynamically discovering new agents, products, and services.

6.
Augmented reality (AR) is an emerging technology that enhances a user's view by overlaying graphical information. We developed a prototype AR system geared toward medical applications, built around a stereoscopic head-mounted display of the video-see-through variety. The newest generation of this prototype achieves high performance on a standard PC platform: stereoscopic video images are augmented with medical graphics in real time, at 30 frames per second and XGA (1024×768) resolution. The system provides a compelling AR perception: the graphics appear firmly anchored in the scene, with no time lag between video and graphics and no apparent jitter of the graphics. With the head-mounted display, the user has natural and direct access to the 3D structure of the scene, based on both stereo and kinetic depth cues. In this paper, we describe the architecture and several features of the AR prototype in detail. Head tracking is accomplished with a single-camera system, with the dedicated tracker camera placed on the head-mounted display; this configuration is the foundation for achieving a high-accuracy graphics overlay. We are now exploring the use of the prototype for a variety of medical applications. This paper gives an overview of the pre-clinical tests we have performed for interventional guidance. Overall, the feedback has been very positive and encouraging, and we are continuing to work toward realizing the clinical potential of the technology.

7.
Interactive Illustrative Rendering on Mobile Devices
Scientists, engineers, and artists regularly use illustrations in design, training, and education to display conceptual information, describe problems, and solve them. Researchers have developed many advanced rendering techniques on desktop platforms to facilitate illustration generation, but adapting these techniques to mobile platforms has not been easy. We discuss how advanced illustrative rendering techniques, such as interactive cutaway views, ghosted views, silhouettes, and selective rendering, have been adapted to mobile devices. We also present MobileVis, our interactive illustrative 3D graphics and text rendering system that lets users explore 3D models' interior structures, display part annotations, and visualize instructions, such as assembly and disassembly procedures for mechanical models.

8.
9.
Building and using a scalable display wall system
Princeton's scalable display wall project explores building and using a large-format display with commodity components. The prototype system has been operational since March 1998. Our goal is to construct a collaborative space that fully exploits a large-format display system with immersive sound and natural user interfaces. The prototype is built from low-cost commodity components: a cluster of PCs, PC graphics accelerators, consumer video and sound equipment, and portable presentation projectors. This approach has the advantages of low cost and of tracking technology improvements well, since high-volume commodity components typically have better price/performance ratios and improve at faster rates than special-purpose hardware. We report our early experiences in building and using the display wall system. In particular, we describe our approach to research challenges in several specific areas, including seamless tiling, parallel rendering, parallel data visualization, parallel MPEG decoding, layered multiresolution video input, multichannel immersive sound, user interfaces, application tools, and content creation.

10.
Computed tomography (CT) generates cross-sectional images of the body, and visualizing CT images has been a challenging problem. The emergence of augmented and virtual reality technology has provided promising solutions, but existing solutions suffer from tethered displays or wireless transmission latency. In this paper, we present ARSlice, a proof-of-concept prototype that can visualize CT images in an untethered manner without wireless transmission latency. The ARSlice prototype consists of two parts, the user end and the projector end. By employing dynamic tracking and projection, the projector end tracks the user-end equipment and projects CT images onto it in real time; the user-end equipment then displays these CT images in 3D space. Its main feature is that the user-end equipment is a purely optical device with light weight, low cost, and no energy consumption. Our experiments demonstrate that the ARSlice prototype provides a subset of the six degrees of freedom for the user at a high frame rate. By interactively visualizing CT images in 3D space, ARSlice can help untrained users better understand that CT images are slices of a body.

11.
This paper presents the syllabus for an introductory computer graphics course that emphasizes the use of programmable shaders while also teaching raster-level algorithms. We describe a Java-based framework used for the programming assignments in this course; it implements a shader-enabled software renderer and an interactive 3D editor. Teaching shader programming in concert with the low-level graphics pipeline makes it easier for our students to learn modern OpenGL with shaders in our follow-up intermediate course. We also show how to create attractive course material using COLLADA, an open standard for 3D content exchange, and describe our approach to organizing the practical course.

12.
Many protocols optimized for transmission over wireless networks have been proposed. However, one issue that has not been examined is how human perception should inform the transmission strategy for three-dimensional (3D) objects. Several factors, such as the number of vertices and the resolution of the texture, affect the display quality of 3D objects. When the resources of a graphics system are not sufficient to render the ideal image, degradation is inevitable. It is therefore important to study how individual factors affect overall quality, and how degradation can be controlled given limited bandwidth and the possibility of data loss. In this paper, the essential factors determining display quality are reviewed. We provide an overview of our research on designing a 3D perceptual quality metric that integrates two important factors, texture resolution and mesh resolution, which control the transmission bandwidth requirements. A review of robust mesh transmission under packet loss is presented, followed by a discussion of how the existing literature differs from our problem and approach. We then suggest alternative strategies for packet transmission of both 3D texture and mesh, and compare these strategies with respect to preserving 3D perceptual quality under packet loss.
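The bandwidth-control problem described above can be framed as a small discrete optimization: pick the mesh/texture resolution pair that maximizes a perceptual quality score under a byte budget. Everything below is a hypothetical framing of ours; the level names, sizes, and scores are invented, and the actual metric in the reviewed work is perceptually derived rather than tabulated.

```python
def best_allocation(options, budget_bytes):
    """Choose the (mesh_level, tex_level) pair maximizing a composite
    perceptual-quality score subject to a transmission budget.

    `options` maps (mesh_level, tex_level) -> (size_bytes, quality_score).
    Brute force suffices for the handful of discrete levels a server would
    realistically offer. Returns None if nothing fits the budget."""
    feasible = [(q, m, t) for (m, t), (s, q) in options.items() if s <= budget_bytes]
    if not feasible:
        return None
    _, m, t = max(feasible)
    return m, t

# Hypothetical levels: (size in bytes, perceptual quality in [0, 1]).
options = {
    ("coarse", "low"):  (20_000, 0.40),
    ("coarse", "high"): (60_000, 0.55),
    ("fine",   "low"):  (70_000, 0.60),
    ("fine",   "high"): (110_000, 0.90),
}
print(best_allocation(options, budget_bytes=80_000))  # -> ('fine', 'low')
```

Note that with these made-up scores, a tight budget favors spending bytes on the mesh rather than the texture; the paper's point is precisely that such trade-offs should be driven by a measured perceptual metric, not by guesswork.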

13.
A new requirements-based programming approach to the engineering of computer-based systems offers not only an underlying formalism, but also full formal development from requirements capture through to the automatic generation of provably-correct code. The method, Requirements-to-Design-to-Code (R2D2C), is directly applicable to the development of autonomous systems and systems having autonomic properties. We describe both the R2D2C method and a prototype tool that embodies the method, and illustrate the applicability of the method by describing how the prototype tool could be used in the development of LOGOS, a NASA autonomous ground control system that exhibits autonomic behavior. Finally, we briefly discuss other possible areas of application of the approach.

14.
An approach to achieving a self‐calibrating three‐dimensional (3D) light field display is investigated in this paper. The proposed display is built upon spliced multi‐LCDs, lens and diaphragm arrays, and a directional diffuser. The light field imaging principle, hardware configuration, diffuser characteristics, and image reconstruction simulation are described and analyzed. Beyond the light field imaging itself, a self‐calibration method is proposed to improve imaging performance: an image sensor captures calibration patterns projected onto, and reflected by, a polymer‐dispersed liquid crystal film that is attached to and shapes the diffuser. These calibration components are assembled with the display unit and can be switched between display mode and calibration mode. In calibration mode, the imperfect imaging relations of the optical components are captured and calibrated automatically. We demonstrate the design by implementing a prototype of the proposed 3D light field display using modified off‐the‐shelf products. The proposed approach meets the requirements of practical application: scalable configuration, fast calibration, a large viewing angular range, and smooth motion parallax.
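Camera-based calibration of projected patterns typically reduces to recovering a geometric mapping per optical path; a planar homography estimated by the direct linear transform (DLT) is the standard minimal example of such a mapping. This sketch is our illustration of that general idea, not the paper's calibration model, which is not specified in the abstract and may be richer.

```python
import numpy as np

def fit_homography(src, dst):
    """Estimate the 3x3 homography mapping src points to dst points via the
    direct linear transform (DLT): stack two linear constraints per point
    correspondence and take the null-space vector of the system."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def apply_h(H, pt):
    """Apply a homography to a 2D point (homogeneous divide included)."""
    v = H @ np.array([pt[0], pt[1], 1.0])
    return v[0] / v[2], v[1] / v[2]

# Recover a known homography from four synthetic pattern correspondences.
src = [(0.0, 0.0), (100.0, 0.0), (100.0, 100.0), (0.0, 100.0)]
H_true = np.array([[1.2, 0.1, 5.0], [-0.05, 0.9, -3.0], [5e-4, 2e-4, 1.0]])
dst = [apply_h(H_true, p) for p in src]
H = fit_homography(src, dst)
print(np.allclose(H, H_true, atol=1e-6))
```

In a real calibration pass, `src` would be the known pattern coordinates and `dst` their positions as seen by the image sensor, measured once per switch into calibration mode.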

15.
3D stereo interactive medical visualization
Our interactive 3D stereo display helps guide clinicians during endovascular procedures, such as intraoperative needle insertion and stent placement relative to the target organs. We describe a new method of guiding endovascular procedures using interactive 3D stereo visualizations, taking the transjugular intrahepatic portosystemic shunt (TIPS) procedure as an example. Our goal is to increase the speed and safety of endovascular procedures by providing the interventionalist with 3D information as the operation proceeds, so that the needle position and trajectory can readily be adjusted to reach the target on the first pass. We propose a 3D stereo display of the interventionalist's needle and the target vessels, and we add interactivity via head tracking so that the interventionalist gains a better 3D sense of the relationship between the target vessels and the needle during needle advancement.
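One quantity such a guidance view makes visible is how far the needle's current straight-line path would miss the target, so the trajectory can be corrected before advancing. The function below is our illustrative point-to-line computation; the name and framing are not from the paper.

```python
import numpy as np

def needle_miss_distance(tip, direction, target):
    """Perpendicular distance from the target point to the straight line along
    which the needle would advance from its current tip position. Computed as
    the norm of the target offset vector minus its projection onto the
    (normalized) needle direction. Illustrative only."""
    d = np.asarray(direction, dtype=float)
    d /= np.linalg.norm(d)
    v = np.asarray(target, dtype=float) - np.asarray(tip, dtype=float)
    return float(np.linalg.norm(v - np.dot(v, d) * d))

print(needle_miss_distance((0, 0, 0), (0, 0, 1), (3, 4, 10)))  # -> 5.0
```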

16.
Multi-view autostereoscopic display is expected to become a mainstream 3D display technology: it provides multiple viewpoints, multiple viewing zones, high resolution, and realistic 3D imagery without any auxiliary viewing equipment. This paper describes the design principles of a multi-projector, multi-view autostereoscopic display system and details its software and hardware architecture. An autostereoscopic display prototype based on a projector array and a horizontally optically anisotropic screen was built, and an automatic calibration system for the projector array was developed, improving calibration accuracy and avoiding the tedious calibration process that a large number of projectors would otherwise require. Experimental results show that the system delivers a realistic 3D visual experience to viewers.

17.
We present a threads and halos representation for interactive volume rendering of vector-field structure and describe a number of additional components that combine to create effective visualizations of multivalued 3D scientific data. After filtering linear structures, such as flow lines, into a volume representation, we use a multilayer volume rendering approach to simultaneously display this derived volume along with other data values. We demonstrate the utility of threads and halos in clarifying depth relationships within dense renderings and we present results from two scientific applications: visualization of second-order tensor valued magnetic resonance imaging (MRI) data and simulated 3D fluid flow data. In both application areas, the interactivity of the visualizations proved to be important to the domain scientists. Finally, we describe a PC-based implementation of our framework along with domain specific transfer functions, including an exploratory data culling tool, that enable fast data exploration.
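The multilayer volume rendering mentioned above rests on standard ray compositing. As a minimal sketch, here is the front-to-back accumulation step along one viewing ray; we simplify color to a scalar, where a real renderer composites RGB per channel:

```python
def composite_front_to_back(samples):
    """Front-to-back alpha compositing along a single viewing ray -- the basic
    accumulation step underlying multilayer volume rendering. Each sample is a
    (color, opacity) pair for one layer, ordered nearest-first. Accumulation
    stops naturally as accumulated opacity approaches 1."""
    color_acc, alpha_acc = 0.0, 0.0
    for color, alpha in samples:
        color_acc += (1.0 - alpha_acc) * alpha * color
        alpha_acc += (1.0 - alpha_acc) * alpha
    return color_acc, alpha_acc

# A bright thread sample in front of a dimmer halo sample.
print(composite_front_to_back([(1.0, 0.5), (0.5, 0.5)]))  # -> (0.625, 0.75)
```

The halo effect itself comes from how samples near a thread are shaded before compositing; the compositing loop is the same either way.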

18.
Finding trading patterns in stock market data
This article describes our design and evaluation of a multisensory human perceptual tool for the real-world task domain of stock market trading. The tool is complementary in that it displays different information to different senses: our design incorporates both a 3D visual display and a 2D sound display. The data mined in this case study is bid-and-ask data, also called depth-of-market data, from the Australian Stock Exchange. Our visual-auditory display is the bid-ask landscape, which we developed over many iterations in close collaboration with an expert in the stock market domain. From this domain's perspective, the project's principal goal was to develop a tool that helps traders uncover new trading patterns in depth-of-market data. In this article, we not only describe the design of the bid-ask landscape but also report on a formal evaluation of this visual-auditory display, in which we tested nonexperts on their ability to use the tool to predict the future direction of stock prices; the results of this formal evaluation are complex.

19.
20.
This paper illustrates the cooperation between Image Processing and Computer Graphics. We present a new method for computing realistic 3D images of buildings or other complex objects from a set of real images and from the 3D model of the corresponding real scene. We also show how to remove the real shadows from these images and how to simulate new lighting. Our system can be used to generate synthetic images with total control over the camera position, the characteristics of the optical system, and the solar lighting. We propose several methods for avoiding most of the artifacts that a direct application of our approach could produce. Finally, we propose a general scheme in which these images are used to test Image Processing algorithms, long before the first physical prototype is built.

