Similar literature
20 similar documents retrieved; search took 31 ms
1.
2.
This article proposes a 3-dimensional (3D) vision-based ambient user interface as an interaction metaphor that exploits a user's personal space and dynamic gestures. In human-computer interaction, a user interface should not be a bulky or complicated device if it is to support natural interaction. The proposed ambient user interface therefore replaces cumbersome devices with an invisible personal space that is virtually augmented through 3D vision techniques. To support natural interaction with the user's dynamic gestures, the user of interest is extracted from the image sequences by the proposed user segmentation method, which retrieves 3D information from the segmented user image through 3D vision techniques and a multiview camera. With this 3D information, a set of 3D boxes (SpaceSensor) is constructed and augmented around the user, who can then interact with the system by touching the augmented SpaceSensor. When tracking the user's dynamic gestures, the computational complexity of SpaceSensor is lower than that of conventional 2-dimensional vision-based gesture tracking techniques, because only the touched positions of SpaceSensor are tracked. According to the experimental results, the proposed ambient user interface can be applied to various systems that require users' real-time dynamic gestures for interaction, in both real and virtual environments.
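To make the SpaceSensor idea concrete, here is a minimal Python sketch (not the authors' implementation; the box layout, thresholds, and the synthetic point cloud are illustrative assumptions): axis-aligned boxes are placed around the tracked user, and a box counts as touched when enough reconstructed 3D points fall inside it.

    import numpy as np

    def make_space_sensors(center, reach=0.6, size=0.25):
        """Place a ring of box sensors (min/max corners) around the user's torso."""
        boxes = []
        for angle in np.linspace(0.0, 2 * np.pi, 8, endpoint=False):
            c = center + reach * np.array([np.cos(angle), 0.0, np.sin(angle)])
            boxes.append((c - size / 2, c + size / 2))
        return boxes

    def touched_sensors(points, boxes, min_hits=30):
        """Indices of boxes containing at least `min_hits` reconstructed user points."""
        touched = []
        for i, (lo, hi) in enumerate(boxes):
            inside = np.all((points >= lo) & (points <= hi), axis=1)
            if inside.sum() >= min_hits:
                touched.append(i)
        return touched

    # Toy example: a noisy torso point cloud plus an arm reaching to the right.
    torso = np.random.normal([0.0, 1.2, 0.0], 0.15, size=(500, 3))
    arm = np.random.normal([0.6, 1.2, 0.0], 0.05, size=(80, 3))
    sensors = make_space_sensors(center=np.array([0.0, 1.2, 0.0]))
    print(touched_sensors(np.vstack([torso, arm]), sensors))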

3.
This paper presents an extended version of Navidget, a new interaction technique for camera positioning in 3D environments. The technique derives from point-of-interest (POI) approaches, in which the endpoint of a trajectory is selected to produce smooth camera motions. Unlike existing POI techniques, Navidget does not attempt to automatically estimate where and how the user wants to move; instead, it provides good feedback and control for fast and easy interactive camera positioning. Navidget can also be useful for distant inspection when used with a preview window. This 3D user interface is based entirely on 2D inputs, which makes it appropriate for a wide variety of visualization systems, from small handheld devices to large interactive displays. A user study on a TabletPC shows that Navidget's usability is very good for both expert and novice users, and that the technique is more appropriate than conventional 3D viewer interfaces for numerous 3D camera positioning tasks. Beyond these tasks, the Navidget approach can be useful for further purposes such as collaborative work and animation.
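As a rough illustration of the POI principle underlying this family of techniques (a sketch under assumed conventions, not the Navidget implementation), a camera pose can be placed on a sphere around the selected point of interest and reached through an eased interpolation:

    import numpy as np

    def camera_pose_on_sphere(poi, radius, azimuth, elevation):
        """Camera position on a sphere around the POI, looking back at the POI."""
        offset = radius * np.array([
            np.cos(elevation) * np.cos(azimuth),
            np.sin(elevation),
            np.cos(elevation) * np.sin(azimuth),
        ])
        eye = poi + offset
        forward = (poi - eye) / np.linalg.norm(poi - eye)
        return eye, forward

    def smooth_path(eye_from, eye_to, steps=60):
        """Ease-in/ease-out interpolation for the camera flight."""
        t = (1 - np.cos(np.linspace(0, np.pi, steps))) / 2   # cosine easing, 0..1
        return [(1 - s) * eye_from + s * eye_to for s in t]

    eye, forward = camera_pose_on_sphere(np.array([0.0, 0.0, 0.0]), radius=3.0,
                                         azimuth=np.pi / 4, elevation=np.pi / 6)
    path = smooth_path(np.array([10.0, 2.0, 10.0]), eye)   # frames of the camera flight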

4.
In this paper, we propose an interactive technique for constructing a 3D scene from sparse user inputs. We represent the 3D scene as a Layered Depth Image (LDI) composed of a foreground layer and a background layer, each with a corresponding texture and depth map. Given user-specified sparse depth inputs, depth maps are computed over superpixels using interpolation with geodesic-distance weighting and an optimization framework. This computation is immediate, which allows the user to edit the LDI interactively. Additionally, our technique automatically estimates depth and texture in occluded regions using depth discontinuities. In our interface, the user paints strokes directly on the 3D model; the drawn strokes serve as 3D handles with which the user can pull out or push in the 3D surface easily and intuitively, with real-time feedback. We show that our technique enables efficient modeling of LDIs that produce convincing 3D effects.
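The depth propagation from sparse scribbles can be pictured with a much-simplified Python sketch (the real system works on superpixels plus an optimization framework; here the geodesic distance is approximated by a plain feature-space distance for brevity, and all names are illustrative):

    import numpy as np

    def interpolate_depth(features, seed_idx, seed_depth, sigma=0.2):
        """features: (N, d) per-superpixel position/colour descriptors.
        seed_idx, seed_depth: indices and user-given depths of scribbled superpixels.
        Returns one depth value per superpixel."""
        depth = np.zeros(len(features))
        for i, f in enumerate(features):
            d = np.linalg.norm(features[seed_idx] - f, axis=1)   # distance to each seed
            w = np.exp(-(d / sigma) ** 2)                        # closer seeds dominate
            depth[i] = np.dot(w, seed_depth) / (w.sum() + 1e-8)
        return depth

    # Toy example: a 1D strip of superpixels with depth scribbles at both ends.
    feats = np.linspace(0.0, 1.0, 10)[:, None]
    print(interpolate_depth(feats, seed_idx=np.array([0, 9]),
                            seed_depth=np.array([1.0, 5.0])))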

5.
We present design principles for conceiving tangible user interfaces for interactive, physically-based deformation of 3D models. Based on these principles, we developed a first prototype that uses a passive tangible user interface to embody the 3D model. By associating an arbitrary reference material with the user interface, we convert its displacements into the forces required by physically-based deformation models. These forces are then applied, via a physical deformation model, to the 3D model made of any material. In this way we compensate for the absence of direct haptic feedback, which allows us to use a force-driven physically-based deformation model. A user study on simple deformations of various metal beams shows that our prototype is usable for deformation, with the user interface embodying the virtual beam. These first results validate our design principles and also have high educational value for mechanical engineering lectures.
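The displacement-to-force mapping described above can be sketched in a few lines of Python (the stiffness values and function names are illustrative assumptions, not the authors' parameters): the prop's displacement is read as if a reference material had been bent, yielding a force that is then applied to the virtual beam of the target material.

    def displacement_to_force(displacement_m, reference_stiffness_n_per_m):
        """Hooke-like mapping: prop displacement -> equivalent applied force."""
        return reference_stiffness_n_per_m * displacement_m

    def deform_virtual_beam(force_n, beam_stiffness_n_per_m):
        """Resulting deflection of the virtual beam made of the target material."""
        return force_n / beam_stiffness_n_per_m

    # Bending the prop by 2 cm with an assumed reference stiffness of 500 N/m.
    force = displacement_to_force(0.02, 500.0)    # 10 N
    print(deform_virtual_beam(force, 2000.0))     # a stiffer virtual beam bends less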

6.
In this age of (near-)adequate computing power, the power and usability of the user interface are as key to an application's success as its functionality. Most of the code in modern desktop productivity applications resides in the user interface. But despite its centrality, the user interface field is currently in a rut: the WIMP paradigm (Windows, Icons, Menus, Point-and-click GUI based on keyboard and mouse) has evolved little since it was pioneered by Xerox PARC in the early '70s. Computer and display form factors will change dramatically in the near future, and new kinds of interaction devices will soon become available. Desktop environments will be enriched not only with PDAs such as the Newton and Palm Pilot, but also with wearable computers and large-screen displays produced by new projection technology, including office-based immersive virtual reality environments. On the input side, we will finally have speech-recognition and force-feedback devices. Thus we can look forward to user interfaces that are dramatically more powerful and better matched to human sensory capabilities than those dependent solely on keyboard and mouse. 3D interaction widgets controlled by mice or other interaction devices with three or more degrees of freedom are a natural evolution from their two-dimensional WIMP counterparts, and can decrease the cognitive distance between widget and task for many tasks that are intrinsically 3D, such as scientific visualization and MCAD. More radical post-WIMP UIs are needed for immersive virtual reality, where keyboard and mouse are absent. Immersive VR provides good driving applications for developing post-WIMP UIs based on multimodal interaction that involves more of our senses by combining gesture, speech, and haptics.

7.
Large datasets of 3D objects require an intuitive way to browse and quickly explore shapes from the collection. We present a dynamic map of shapes in which similar shapes are placed next to each other. Similarity between 3D models lives in a high-dimensional space that cannot be accurately expressed in a two-dimensional map. We resolve this discrepancy by providing a local map with pan capabilities and a user interface that resembles the online experience of navigating geographical maps. As the user navigates the map, new shapes appear that correspond to the user's navigation tendencies and interests, while maintaining a continuous browsing experience. In contrast with state-of-the-art methods, which typically reduce the search space by selecting constraints or employing relevance feedback, our method enables exploration of large sets without constraining the search space, allowing the user greater creativity and serendipity. A user study showed a strong preference for our method over a standard relevance-feedback method.

8.
User interfaces have traditionally followed the WIMP (window, icon, menu, pointer) paradigm. Though functional and powerful, such interfaces usually make it cumbersome for a novice user to design a complex model, requiring considerable expertise and effort. This paper presents a system for designing geometric models and image deformations with sketched curves, using Green coordinates. In 3D modeling, the user first creates a 3D model with a sketching interface, where a given 2D curve is interpreted as the projection of a 3D curve. The user can add, remove, and deform these control curves easily, as if working with a 2D line drawing. For a given set of curves, the system automatically identifies the topology and face embedding by applying a graph rotation system. Green coordinates are then used to deform the generated models in a detail-preserving manner. We have also developed a sketch-based image-editing interface that deforms image regions using Green coordinates. Hardware-assisted schemes are provided for both control-shape deformation and the subsequent surface optimization; the experimental results demonstrate that 3D/2D deformations can be achieved in real time.

9.
Mass Market Applications for Real Time 3D Graphics
This paper discusses the applicability of real-time 3D image synthesis to mass-market products and covers likely application areas. It discusses the man-machine interface problems that arise in systems where it must be assumed that little or no training will be provided. A research prototype package, a kitchen designer that allows a user to design and view a kitchen in real time, is described in some detail, with emphasis on aspects of the user interface design such as 3D navigational aids. Finally, the required level of system performance is considered.

10.
In recent years, sophisticated automatic segmentation algorithms have been developed for a variety of medical image segmentation problems. However, there are always cases where automatic algorithms fail to provide an acceptable segmentation. In these cases the user needs efficient segmentation editing tools, a problem that has received little attention in research. We give a comprehensive overview of segmentation editing for three-dimensional (3D) medical images. For segmentation editing in two-dimensional (2D) images, we discuss a sketch-based approach in which the user modifies the segmentation in the contour domain. Based on this 2D interface, we present both an image-based and an image-independent method for intuitive and efficient segmentation editing in 3D, in the context of tumour segmentation in computed tomography (CT). Our editing tools have been evaluated on a database containing 1226 representative liver metastases, lung nodules and lymph nodes of different shape, size and image quality. In addition, we have performed a qualitative evaluation with radiologists and technical experts, demonstrating the efficiency of our tools.

11.
Here, we analyze toolkit designs for building graphical applications with rich user interfaces, comparing polylithic and monolithic toolkit-based solutions. Polylithic toolkits encourage extension by composition and follow a design philosophy similar to the 3D scene graphs supported by toolkits such as Java3D and OpenInventor. Monolithic toolkits, on the other hand, encourage extension by inheritance and are more akin to 2D graphical user interface toolkits such as Swing or MFC. We describe Jazz (a polylithic toolkit) and Piccolo (a monolithic toolkit), each of which we built to support interactive 2D structured-graphics applications in general, and zoomable user interface applications in particular. We examine the trade-offs of each approach in terms of performance, memory requirements, and programmability. We conclude that a polylithic approach is most suitable for toolkit builders, for visual design software where code is automatically generated, and for application builders who customize the toolkit heavily. Correspondingly, we find that monolithic approaches appear to be best for application builders who do little toolkit customization.
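The composition-versus-inheritance distinction can be illustrated with a toy Python sketch (this is not Jazz or Piccolo code; the class names are invented): a polylithic design adds behaviour by wrapping small nodes, while a monolithic design adds behaviour by subclassing one rich node class.

    class Node:
        """Shared minimal scene-graph node."""
        def __init__(self):
            self.children = []
        def render(self):
            for child in self.children:
                child.render()

    # Polylithic style: behaviour is added by composing/wrapping small nodes.
    class ZoomNode(Node):
        def __init__(self, child, scale):
            super().__init__()
            self.children = [child]
            self.scale = scale
        def render(self):
            print(f"push scale {self.scale}")
            super().render()
            print("pop scale")

    # Monolithic style: behaviour is added by inheriting from one rich node class.
    class RichNode(Node):
        scale = 1.0
        def render(self):
            print(f"render widget at scale {self.scale}")

    class ZoomingWidget(RichNode):   # extension by inheritance
        scale = 2.0

    ZoomNode(RichNode(), 2.0).render()   # composition: the wrapper scales its child
    ZoomingWidget().render()             # inheritance: the subclass overrides a field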

12.
This paper describes a CASE shell of a new type, namely the instrumental system SWS, constructed in accordance with a framework data model. A scheme of the instrumental system is proposed. The list of the SWS's main user functions shows that typifying and formalizing requests to a database with a well-formalized data model, together with unifying and minimizing the interface, makes it possible to model the full range of user requests. The results of a numerical experiment on the real-time formation of OLAP data elements are presented, and the real-time approach is shown to be the most advanced.

13.
We present a sketch-based user interface designed to help novices create 3D character animations by multi-pass sketching, avoiding the ambiguities usually present in sketch input. Our system also contains sketch-based editing and reproducing tools, which allow paths and motions to be partially updated rather than wholly redrawn, and a graphical block interface that permits motion sequences to be organized and reconfigured easily. A user evaluation with participants of different skill levels suggests that novices using this sketch interface can produce animations almost as quickly as users who are experienced in 3D animation.

14.
Query-Structure-Based User Browsing Guidance in OLAP Systems
On-Line Analytical Processing (OLAP) systems are the main front-end support tools for data warehouses, and users access data in an OLAP system by browsing. OLAP users typically have relatively stable information needs, and the structure of the queries they issue reflects, to a certain extent, the information they care about; consequently, the structure of a user's queries also remains fairly stable. Taking query structure as the basis, this paper analyzes the query behavior of OLAP users, proposes a method for building user profiles in an OLAP system, and discusses how to use these profiles to predict user behavior and further guide user browsing. On this basis, while an OLAP user is browsing, the OLAP front end can highlight the places the user is likely to be interested in and guide the user's next browsing actions, so that the information search can be completed with ease.
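A minimal Python sketch of this profile idea (the data model is deliberately simplified and the class name is invented): each executed query is reduced to its structure, i.e. the set of dimensions and measures it touches, a per-user profile counts transitions between structures, and the front end highlights the structures most likely to be browsed next.

    from collections import Counter, defaultdict

    class UserProfile:
        def __init__(self):
            self.transitions = defaultdict(Counter)   # structure -> Counter of next structures
            self.last = None

        def record(self, query_structure):
            """Register one executed query, keyed by its structure."""
            key = frozenset(query_structure)
            if self.last is not None:
                self.transitions[self.last][key] += 1
            self.last = key

        def suggest(self, top_k=2):
            """Structures the user is most likely to browse next."""
            if self.last is None:
                return []
            return [set(s) for s, _ in self.transitions[self.last].most_common(top_k)]

    profile = UserProfile()
    for q in [{"time", "sales"}, {"time", "region", "sales"},
              {"time", "sales"}, {"time", "region", "sales"}]:
        profile.record(q)
    print(profile.suggest())   # most likely next query structure(s)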

15.
Editing and manipulating existing 3D geometric objects is a means to extend their repertoire and promote their availability. Traditionally, tools to compose or manipulate objects defined by 3D meshes have been the realm of artists and experts. In this paper, we introduce a simple and effective user interface that lets non-professionals easily compose 3D mesh parts. Our technique borrows from the cut-and-paste paradigm, where a user can cut parts out of existing objects and paste them onto others to create new designs. To help the user attach objects to each other quickly and simply, many computer graphics applications support the notion of "snapping". Similarly, our tool allows the user to loosely drag one mesh part onto another with an overlap and lets the system snap them together gracefully. Snapping is accomplished with our Soft-ICP algorithm, which replaces the global transformation of the ICP algorithm with a set of point-wise, locally supported transformations. The technique enhances registration with a set of rigid-to-elastic transformations that account for simultaneous global positioning and local blending of the objects. For completeness of our framework, we present an additional simple mesh-cutting tool that adapts the graph-cut algorithm to meshes.
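A much-simplified sketch of the snapping idea follows (this is not the authors' Soft-ICP: it computes a single rigid alignment from the overlap and blends it out with distance, whereas the paper uses point-wise locally supported transformations; all names and parameters are illustrative).

    import numpy as np

    def rigid_fit(src, dst):
        """Least-squares rotation R and translation t mapping src onto dst (Kabsch)."""
        cs, cd = src.mean(0), dst.mean(0)
        U, _, Vt = np.linalg.svd((src - cs).T @ (dst - cd))
        R = (U @ Vt).T
        if np.linalg.det(R) < 0:        # avoid reflections
            Vt[-1] *= -1
            R = (U @ Vt).T
        return R, cd - R @ cs

    def soft_snap(part, overlap_idx, target_overlap, falloff=0.5):
        """Move `part` so its overlap vertices meet `target_overlap`, blending the
        rigid motion out with distance from the overlap region."""
        R, t = rigid_fit(part[overlap_idx], target_overlap)
        moved = part @ R.T + t
        d = np.min(np.linalg.norm(part[:, None] - part[overlap_idx][None], axis=2), axis=1)
        w = np.exp(-(d / falloff) ** 2)[:, None]   # 1 near the overlap, -> 0 far away
        return w * moved + (1 - w) * part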

16.
A Freehand-Sketch-Based 3D CAD System
Post-WIMP interfaces are the main paradigm for next-generation user interfaces. This paper studies and implements a new pen-based 3D CAD system driven by freehand sketches. The key to such a system is designing a natural, sound, and efficient framework for representing and capturing sketch semantics, one that focuses on answering two questions: what the user is doing and where the user is doing it. The paper discusses the design and implementation of this sketch-semantics framework in detail and presents a prototype system that, using only a stylus and a tablet as interactive input devices, can carry out fairly complex 3D solid and 3D scene design in a natural and efficient way.

17.
Understanding how an animal can deform and articulate is essential for realistic modification of its 3D model. In this paper, we show that such information can be learned from user-clicked 2D images and a template 3D model of the target animal. We present a volumetric deformation framework that produces a set of new 3D models by deforming a template 3D model according to a set of user-clicked images. Our framework is based on a novel locally-bounded deformation energy, where every local region has its own stiffness value that bounds how much distortion is allowed at that location. We jointly learn the local stiffness bounds as we deform the template 3D mesh to match each user-clicked image, and show that this seemingly complex task can be solved as a sequence of convex optimization problems. We demonstrate the effectiveness of our approach on cats and horses, which are highly deformable and articulated animals. Our framework produces new 3D models of animals that are significantly more plausible than those produced by methods without learned stiffness.

18.
On-Line Analytical Processing (OLAP) is a data analysis technique typically used on local, well-prepared data. However, initiatives like Open Data and Open Government bring new, publicly available data to the web that should be analyzable in the same way. The use of semantic web technologies in this context is especially encouraged by the Linked Data initiative. There is already a considerable amount of statistical linked open data published using the RDF Data Cube Vocabulary (QB), which is designed for these purposes. However, QB lacks some schema constructs essential for OLAP (e.g., dimension levels). The QB4OLAP vocabulary has therefore been proposed to extend QB with the necessary constructs and be fully compliant with OLAP. In this paper, we focus on enriching an existing QB data set with QB4OLAP semantics. We first thoroughly compare the two vocabularies and outline the benefits of QB4OLAP. Then, we propose a series of steps to automate the enrichment of QB data sets with specific QB4OLAP semantics, the most important being the definition of aggregate functions and the detection of new concepts when constructing dimension hierarchies. The proposed steps form a semi-automatic enrichment method, implemented in a tool that supports enrichment in an interactive and iterative fashion: the user can enrich the QB data set with QB4OLAP concepts (e.g., full-fledged dimension hierarchies) by choosing among the candidate concepts automatically discovered by the proposed steps. Finally, we conduct experiments with 25 users and three real-world QB data sets to evaluate our approach. The evaluation demonstrates the feasibility of our approach and shows that, in practice, our tool facilitates, speeds up, and guarantees correct results of the enrichment process.
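The kind of enrichment described above can be pictured with a short rdflib sketch in Python. The QB namespace is the standard Data Cube namespace; the QB4OLAP namespace and the exact property names used here (hasAggregateFunction, inLevel, parentLevel, ...) are quoted from memory and should be checked against the QB4OLAP specification before use; the example resources are invented.

    from rdflib import Graph, Namespace

    QB   = Namespace("http://purl.org/linked-data/cube#")
    QB4O = Namespace("http://purl.org/qb4olap/cubes#")   # assumed namespace IRI
    EX   = Namespace("http://example.org/stats#")

    g = Graph()
    g.bind("qb", QB)
    g.bind("qb4o", QB4O)

    # 1) Attach an aggregate function to the measure component of the cube's DSD.
    g.add((EX.measureComponent, QB.measure, EX.population))
    g.add((EX.measureComponent, QB4O.hasAggregateFunction, QB4O.sum))

    # 2) Promote a flat QB dimension into levels of a dimension hierarchy.
    g.add((EX.refArea, QB4O.hasHierarchy, EX.geoHierarchy))
    g.add((EX.city, QB4O.inLevel, EX.cityLevel))
    g.add((EX.cityLevel, QB4O.parentLevel, EX.countryLevel))

    print(g.serialize(format="turtle"))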

19.
张全贵  李鑫  王普 《计算机工程与设计》2012,33(6):2472-2475,2521
The two-dimensional user interfaces of traditional configuration (SCADA-style) software offer low realism and cannot intuitively reflect production conditions on the industrial shop floor. To address this problem, a 3D user-interface configuration engine for configuration software was designed and implemented on top of X3D. The paper introduces the system architecture of the engine and the workflow for composing scenes with it. A library of 3D scene objects was built on X3D, and associations between objects were established as constraints on scene composition, which speeds up the composition process. A prototype of the engine was implemented with the XJ3D graphics toolkit. Experiments show that configuration with the engine is straightforward and that the 3D user interfaces it generates achieve good simulation fidelity.

20.
This paper describes the concepts, design, implementation, and performance evaluation of a 3D-based user interface for accessing IoT-based Smart Environments (IoT-SE). The generic interaction model of the described work addresses major challenges of human-IoT-SE interaction, such as the cognitive overload associated with manual device selection in complex IoT-SE, loss of user control, a missing system image, and over-automation. To address these challenges, we propose a 3D-based mobile interface for mixed-initiative interaction in IoT-SE. The 3D visualization and 3D UI, acting as the central feature of the system, create a logical link between physical devices and their virtual representation on the end user's mobile device. The user can thus easily identify a device within the environment by its position, orientation, and form, and access the identified device through the 3D interface for direct manipulation within the scene, which overcomes the problem of manual device selection. In addition, the 3D visualization provides a system image for the IoT-SE, supporting users in understanding the ambience and what is going on in it. Furthermore, the mobile interface allows users to control how much, and in what way, the IoT-SE automates the environment: users can stop or postpone system-triggered automatic actions if they do not want them, and they can remove a rule forever, thereby deleting smart behaviors of their IoT-SE. This helps to overcome the automation challenges. We present the design, implementation, and evaluation of the proposed interaction system, choosing smart meeting rooms as the context for prototyping and evaluating our interaction concepts; however, the presented concepts and methods are generic and could be adapted to similar environments such as smart homes. We conducted a subjective usability evaluation (ISO-Norm 9241/110) with 16 users. Overall, the study results indicate that the proposed 3D user interface achieved a high usability score on the ISO-Norm scale.
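A toy Python sketch of the mixed-initiative control loop described above (the class and rule names are invented for illustration): the environment proposes a rule-triggered action, and the mobile UI lets the user accept it, postpone it, or delete the rule that triggered it.

    import time

    class Rule:
        """A smart behaviour: when `condition(state)` holds, `action()` may run."""
        def __init__(self, name, condition, action):
            self.name, self.condition, self.action = name, condition, action

    class SmartRoom:
        def __init__(self):
            self.rules, self.postponed = [], []

        def tick(self, state, ask_user):
            """ask_user(rule) returns "accept", "postpone", or "delete"."""
            for rule in list(self.rules):
                if rule.condition(state):
                    choice = ask_user(rule)
                    if choice == "accept":
                        rule.action()
                    elif choice == "postpone":
                        self.postponed.append((time.time() + 600, rule))
                    elif choice == "delete":      # remove the smart behaviour for good
                        self.rules.remove(rule)

    room = SmartRoom()
    room.rules.append(Rule("auto-dim", lambda s: s["projector_on"],
                           lambda: print("dimming the lights")))
    room.tick({"projector_on": True}, ask_user=lambda rule: "accept")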

