Similar Literature
Found 20 similar records (search time: 15 ms)
1.
It is now possible to capture the 3D motion of the human body on consumer hardware and to puppet skeleton-based virtual characters in real time. However, many characters do not have humanoid skeletons. Characters such as spiders and caterpillars have no bony skeletons at all, and their shapes and motions differ widely. In general, character control under arbitrary shape and motion transformations is unsolved: how might these motions be mapped? We control characters with a method that avoids the rigging-and-skinning pipeline; source and target characters need neither skeletons nor rigs. We use interactively defined sparse pose correspondences to learn a mapping between arbitrary 3D point source sequences and mesh target sequences, and then puppet the target character in real time. We demonstrate the versatility of our method through results on diverse virtual characters driven by different input motion controllers. Our method provides a fast, flexible, and intuitive interface for arbitrary motion mapping, opening new ways to control characters for real-time animation.
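The sparse-correspondence mapping described in this abstract can be illustrated, in spirit, by scattered-data interpolation between a source pose space and a target pose space. The sketch below uses a Gaussian RBF interpolant; the kernel choice, the toy correspondences, and the function name `rbf_pose_map` are illustrative assumptions, not the paper's actual method:

```python
import numpy as np

def rbf_pose_map(src_keys, tgt_keys, sigma=1.0):
    """Build a Gaussian RBF interpolator from sparse source/target pose pairs."""
    def phi(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * sigma**2))
    G = phi(src_keys, src_keys)
    # Small ridge term keeps the system well conditioned.
    W = np.linalg.solve(G + 1e-8 * np.eye(len(src_keys)), tgt_keys)
    return lambda q: phi(np.atleast_2d(q), src_keys) @ W

# Three corresponding poses: 2D source control points -> 4D target mesh parameters.
src = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
tgt = np.array([[0.0, 0, 0, 0], [1.0, 1, 0, 0], [0.0, 0, 1, 1]])
f = rbf_pose_map(src, tgt)
# The interpolant reproduces the training correspondences and blends between them.
print(np.allclose(f(src[1]), tgt[1], atol=1e-4))
```

Once built, such a map can be evaluated per frame on incoming source points, which is what makes real-time puppeting feasible.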

2.
Performance has a spontaneity and “aliveness” that can be difficult to capture in more methodical animation processes such as keyframing. Access to performance animation has traditionally been limited to low-degree-of-freedom characters or has required expensive hardware. We present a performance-based animation system for humanoid characters that requires no special hardware, relying only on mouse and keyboard input. We deal with the problem of controlling such a high-degree-of-freedom model with low-degree-of-freedom input through the use of correlation maps, which employ 2D mouse input to modify a set of expressively relevant character parameters. Control can be continuously varied by rapidly switching between these maps. We present flexible techniques for varying and combining these maps and a simple process for defining them. The tool is highly configurable, presenting suitable defaults for novices and supporting a high degree of customization and control for experts. Animation can be recorded in a single pass, or multiple layers can be used to increase detail. Results from a user study indicate that novices are able to produce reasonable animations within their first hour of using the system. We also show more complicated results for walking and for a standing character that gestures and dances.
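The idea of a correlation map (low-degree-of-freedom mouse input driving many expressively relevant parameters) can be sketched as one linear layer per map, with continuous blending between the active maps. The class name, weights, and blending scheme below are invented for illustration and are not taken from the paper:

```python
import numpy as np

class CorrelationMap:
    """A linear map from 2D mouse deltas to N character parameters."""
    def __init__(self, weights):
        self.W = np.asarray(weights, dtype=float)  # shape (n_params, 2)

    def apply(self, mouse_xy):
        return self.W @ np.asarray(mouse_xy, dtype=float)

# Two illustrative maps over a 3-parameter character.
arm_map  = CorrelationMap([[1.0, 0.0], [0.0, 0.5], [0.3, 0.3]])
head_map = CorrelationMap([[0.0, 1.0], [0.2, 0.0], [0.0, 0.0]])

def blend(maps, weights, mouse_xy):
    """Continuously vary control by weighting the active maps."""
    return sum(w * m.apply(mouse_xy) for w, m in zip(weights, maps))

pose_delta = blend([arm_map, head_map], [0.7, 0.3], [0.5, -1.0])
print(pose_delta)
```

Rapidly changing the blend weights corresponds to the map-switching behaviour the abstract describes.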

3.
Standard camera and projector calibration techniques use a checkerboard that is manually shown at different poses to determine the calibration parameters. Furthermore, when geometric image correction must be performed on a three-dimensional (3D) surface, as in projection mapping, the surface geometry must be determined. Camera calibration and 3D surface estimation can be costly, error prone, and time-consuming when performed manually. To address this issue, we use an auto-calibration technique that projects a series of Gray-code structured-light patterns. These patterns are captured by the camera to build a dense pixel correspondence between the projector and camera, which is used to calibrate the stereo system via an objective function that embeds the calibration parameters together with the undistorted points. Minimization is carried out by a greedy algorithm that reduces the cost at each iteration with respect to both the calibration parameters and the noisy image points. We test the auto-calibration on different scenes and show that the results closely match a manual calibration of the system. We show that this technique can be used to build a 3D model of the scene, which in turn, with the dense pixel correspondence, can be used for geometric screen correction on any arbitrary surface.
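Gray-code structured light is attractive because consecutive codewords differ in a single bit, which makes decoding robust to noise at stripe boundaries. A minimal sketch of encoding projector columns as bit patterns and decoding the bit stack observed at one camera pixel (the pattern width and helper names are illustrative):

```python
def gray_encode(n):
    return n ^ (n >> 1)

def gray_decode(g):
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n

def column_patterns(width, bits):
    """bits binary images; pattern[b][x] is the b-th Gray-code bit of column x."""
    return [[(gray_encode(x) >> b) & 1 for x in range(width)] for b in range(bits)]

# Decoding the stack of bits observed at one camera pixel recovers the
# projector column that illuminated it, giving one dense correspondence.
pats = column_patterns(width=8, bits=3)
observed = [pats[b][5] for b in range(3)]          # bits seen at column 5
code = sum(bit << b for b, bit in enumerate(observed))
print(gray_decode(code))  # → 5
```

Repeating this per pixel yields the dense projector-camera correspondence the abstract feeds into calibration.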

4.
This paper proposes a concept of center of gravity (COG) viscoelasticity to associate joint viscoelasticity with the inverted pendulum model of humanoid dynamics. Although COG viscoelasticity is based on the well-known kinematic relationship between joint stiffness and end-effector stiffness, it provides practical advantages for both analysis and control of humanoid motions. There are two main contributions. The first is that COG viscoelasticity allows us to analyze fall risk. In a previous study, the author proposed a fall detection method based on the maximal output admissible (MOA) set, which is computed from the feedback gain of the inverted pendulum model. COG viscoelasticity associates joint viscoelasticity with the feedback gain and allows us to compute the corresponding MOA set when an arbitrary joint viscoelasticity is given. The second contribution is that COG viscoelasticity can also be utilized in motion control. After we design a feedback gain in the inverted pendulum model using control theory, COG viscoelasticity can directly transform it into joint viscoelasticity. The validity of COG viscoelasticity is verified with whole-body dynamics simulations.
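The well-known kinematic relationship the paper builds on maps a task-space (here COG) stiffness to joint stiffness through the Jacobian: K_q = J^T K_x J. A minimal numeric sketch with an illustrative planar two-joint Jacobian and diagonal COG stiffness (values invented, not from the paper); the configuration-dependent dJ/dq term is ignored, as is common for small displacements:

```python
import numpy as np

def joint_stiffness_from_cog(J, K_cog):
    """Map task-space (COG) stiffness to joint-space stiffness: Kq = J^T Kx J."""
    return J.T @ K_cog @ J

# Illustrative 2-joint planar Jacobian of the COG and a diagonal COG stiffness.
J = np.array([[-0.8, -0.3],
              [ 0.5,  0.4]])
K_cog = np.diag([200.0, 150.0])
Kq = joint_stiffness_from_cog(J, K_cog)
print(Kq)
```

Since K_cog is positive definite and J has full rank here, the resulting joint stiffness is symmetric positive definite, which is what makes it usable as a control gain.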

5.
Understanding how an animal can deform and articulate is essential for a realistic modification of its 3D model. In this paper, we show that such information can be learned from user-clicked 2D images and a template 3D model of the target animal. We present a volumetric deformation framework that produces a set of new 3D models by deforming a template 3D model according to a set of user-clicked images. Our framework is based on a novel locally-bounded deformation energy, where every local region has its own stiffness value that bounds how much distortion is allowed at that location. We jointly learn the local stiffness bounds as we deform the template 3D mesh to match each user-clicked image. We show that this seemingly complex task can be solved as a sequence of convex optimization problems. We demonstrate the effectiveness of our approach on cats and horses, which are highly deformable and articulated animals. Our framework produces new 3D models of animals that are significantly more plausible than those produced by methods without learned stiffness.

6.
7.
A complete system for path and route finding through arbitrarily complex three-dimensional (3D) polygonal worlds is presented. The system is fully automated and can quickly map a complete digital environment without the need for human intervention. Arbitrarily complex polygonal worlds can be processed within a few minutes on today's computers. The processed information allows a robot with a limited number of degrees of freedom to efficiently navigate around obstacles and find routes through the environment. The system is especially suitable for route finding and navigation through buildings, which nowadays are typically designed with computers; as such, digital polygonal representations of buildings are readily available. Not only can the system be used for robot navigation, it can also be used to guide people through buildings. Being able to quickly compile an arbitrary building into a route- and path-finding system can be especially useful for firefighters and tactical military units who must enter buildings and reach specific locations. When a building is partly destroyed or otherwise inaccessible, its digital representation can easily be adjusted by adding or removing polygons, and the system presented here can be used to quickly recalculate routes. The system has been successfully implemented and employed in the popular computer game Quake III Arena, where the artificial players use it for path and route finding through complex 3D polygonal worlds. Many other computer games can also benefit from the system presented here.
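Once the environment has been compiled into a graph of walkable areas, route finding reduces to shortest-path search. A minimal sketch using A* (with a zero heuristic it degenerates to Dijkstra's algorithm); the toy graph and edge costs are invented:

```python
import heapq

def astar(graph, start, goal, h):
    """graph: node -> [(neighbor, cost)]; h: admissible heuristic to the goal."""
    open_set = [(h(start), 0.0, start, [start])]
    best = {}
    while open_set:
        f, g, node, path = heapq.heappop(open_set)
        if node == goal:
            return path, g
        if best.get(node, float("inf")) <= g:
            continue                     # already expanded with a cheaper cost
        best[node] = g
        for nbr, c in graph.get(node, []):
            heapq.heappush(open_set, (g + c + h(nbr), g + c, nbr, path + [nbr]))
    return None, float("inf")

# Toy navigation graph over walkable areas (edges carry travel cost).
nav = {"A": [("B", 1), ("C", 4)], "B": [("C", 1), ("D", 5)], "C": [("D", 1)]}
path, cost = astar(nav, "A", "D", h=lambda n: 0)   # h=0: plain Dijkstra
print(path, cost)  # → ['A', 'B', 'C', 'D'] 3.0
```

A straight-line distance heuristic over the 3D area centers would make the same search faster without changing the result.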

8.
Parametric PDE techniques, which use partial differential equations (PDEs) defined over a 2D or 3D parametric domain to model graphical objects and processes, can unify geometric attributes and functional constraints of the models. PDEs can also model implicit shapes defined by level sets of scalar intensity fields. In this paper, we present an approach that integrates parametric and implicit trivariate PDEs to define geometric solid models containing both geometric information and intensity distribution, subject to flexible boundary conditions. The integrated formulation of second-order or fourth-order elliptic PDEs permits designers to manipulate PDE objects of complex geometry and/or arbitrary topology through direct sculpting and free-form modeling. We developed a PDE-based geometric modeling system for shape design and manipulation of PDE objects. The integration of implicit PDEs with parametric geometry offers more general and arbitrary shape blending and free-form modeling for objects with intensity attributes than pure geometric models.
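The boundary-conditioned behaviour of second-order elliptic PDE surfaces can be illustrated with Laplace's equation on a small grid: interior values relax to a smooth fill of the boundary data. A minimal Jacobi-relaxation sketch (the grid size and boundary values are illustrative, not from the paper's formulation):

```python
import numpy as np

def solve_laplace(boundary, iters=2000):
    """Jacobi relaxation for Laplace's equation on a grid; NaN marks interior."""
    u = np.where(np.isnan(boundary), 0.0, boundary)
    interior = np.isnan(boundary)
    for _ in range(iters):
        avg = 0.25 * (np.roll(u, 1, 0) + np.roll(u, -1, 0)
                    + np.roll(u, 1, 1) + np.roll(u, -1, 1))
        u[interior] = avg[interior]      # boundary cells are never updated
    return u

# 5x5 domain: left edge held at 1, right edge at 0, ramps top and bottom.
b = np.full((5, 5), np.nan)
b[:, 0], b[:, -1] = 1.0, 0.0
b[0, :], b[-1, :] = np.linspace(1, 0, 5), np.linspace(1, 0, 5)
u = solve_laplace(b)
print(u[2])  # the middle row converges to the linear ramp 1 ... 0
```

Fourth-order PDEs used for shape design behave analogously but also allow derivative (tangency) boundary conditions, which is what gives designers curvature control.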

9.
We propose a novel learning-based solution for motion planning of physically-based humanoid climbing that allows for fast and robust planning of complex climbing strategies and movements, including extreme movements such as jumping. Similar to recent previous work, we combine a high-level graph-based path planner with low-level sampling-based optimization of climbing moves. We contribute by showing that neural network models of move success probability, effortfulness, and control policy can make both the high-level and low-level components more efficient and robust. The models can be trained through random simulation practice without any pre-existing data, and they eliminate the need for laboriously hand-tuned heuristics in the graph search. As a result, we are able to efficiently synthesize climbing sequences involving dynamic leaps and one-hand swings, i.e., there are no limits on movement complexity or on the number of limbs allowed to move simultaneously. Our supplemental video also provides some comparisons between our AI climber and a real human climber.
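One way learned move-success probabilities can drive a high-level graph search is to convert multiplicative success into additive edge costs via negative log-probabilities. A minimal sketch (the probabilities are invented, and the paper's actual cost model also accounts for effortfulness):

```python
import math

# A candidate climbing route as a sequence of moves with predicted success
# probabilities. Maximizing the product of probabilities is equivalent to
# minimizing the sum of negative log-probabilities, which standard
# shortest-path search can handle as edge costs.
probs = [0.9, 0.8, 0.95]
cost = sum(-math.log(p) for p in probs)
print(math.exp(-cost))  # recovers the route's overall success probability
```

This is why a probabilistic model can replace hand-tuned graph-search heuristics: edge costs fall directly out of the model's predictions.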

10.
Three-dimensional detection and shape recovery of a nonrigid surface from video sequences require deformation models to effectively take advantage of potentially noisy image data. Here, we introduce an approach to creating such models for deformable 3D surfaces. We exploit the fact that the shape of an inextensible triangulated mesh can be parameterized in terms of a small subset of the angles between its facets. We use this set of angles to create a representative set of potential shapes, which we feed to a simple dimensionality reduction technique to produce low-dimensional 3D deformation models. We show that these models can be used to accurately model a wide range of deforming 3D surfaces from video sequences acquired under realistic conditions.
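The "feed representative shapes to a simple dimensionality reduction technique" step can be sketched with PCA: if the angle vectors lie near a low-dimensional subspace, a few modes reconstruct them accurately. The synthetic vectors below stand in for the facet-angle samples and are not real mesh data:

```python
import numpy as np

def pca_model(samples, n_modes):
    """Learn a low-dimensional linear deformation model from shape samples."""
    mean = samples.mean(axis=0)
    U, S, Vt = np.linalg.svd(samples - mean, full_matrices=False)
    return mean, Vt[:n_modes]            # mean shape + principal modes

def reconstruct(shape, mean, modes):
    coeffs = (shape - mean) @ modes.T    # project onto the modes
    return mean + coeffs @ modes

# Synthetic "facet angle" vectors living on a 2D subspace of R^6.
rng = np.random.default_rng(0)
basis = rng.standard_normal((2, 6))
samples = rng.standard_normal((50, 2)) @ basis + 0.3

mean, modes = pca_model(samples, n_modes=2)
err = np.abs(reconstruct(samples[0], mean, modes) - samples[0]).max()
print(err < 1e-8)  # two modes suffice because the data are 2D by construction
```

Fitting then amounts to estimating a handful of mode coefficients per frame instead of every vertex position, which is what makes the tracking robust to noise.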

11.
Recent advances in sensing and software technologies enable us to obtain large-scale yet fine 3D mesh models of cultural assets. However, such large models cannot be displayed interactively on consumer computers because of hardware performance limitations. Cloud computing offers a way to process very large amounts of information without adding to each client user's processing cost. In this paper, we propose an interactive rendering system, based on the cloud computing concept, for large 3D mesh models stored in a remote environment and accessed over a network by relatively small-capacity client machines. Our system uses both model- and image-based rendering methods for efficient load balancing between a server and clients. On the server, the 3D models are rendered by the model-based method using a hierarchical data structure with Level of Detail (LOD). On the client, an arbitrary view is constructed by a novel image-based method, referred to as the Grid-Lumigraph, which blends colors from sampled images received from the server. The resulting system can efficiently render any view in real time. We implemented the system and evaluated its rendering and data-transfer performance.

12.
We propose a semi-automatic omnidirectional texturing method that maps a spherical image onto a dense 3D model obtained by a range sensor. For accurate texturing, accurate estimation of the extrinsic parameters is indispensable. To estimate these parameters, we propose a robust 3D registration-based method between a dense range data set and a sparse spherical-image stereo data set. For measuring the distances between the two data sets, we introduce generalized distances that take account of the 3D error distributions of the stereo data. To reconstruct 3D models from images, we use two spherical images taken at arbitrary positions and in arbitrary poses. We then propose a novel rectification method for spherical images that is derived from the essential matrix and facilitates the estimation of disparities. The experimental results show that the proposed method can map the spherical image onto the dense 3D models effectively and accurately.

13.
Statistical Learning for Humanoid Robots
The complexity of the kinematic and dynamic structure of humanoid robots makes conventional analytical approaches to control increasingly unsuitable for such systems. Learning techniques offer a possible way to aid controller design if insufficient analytical knowledge is available, and learning approaches seem mandatory when humanoid systems are supposed to become completely autonomous. While recent research in neural networks and statistical learning has focused mostly on learning from finite data sets without stringent constraints on computational efficiency, learning for humanoid robots requires a different setting, characterized by the need for real-time learning performance from an essentially infinite stream of incrementally arriving data. This paper demonstrates how even high-dimensional learning problems of this kind can successfully be dealt with by techniques from nonparametric regression and locally weighted learning. As an example, we describe the application of one of the most advanced such algorithms, Locally Weighted Projection Regression (LWPR), to the on-line learning of three problems in humanoid motor control: learning inverse dynamics models for model-based control, learning the inverse kinematics of redundant manipulators, and learning oculomotor reflexes. All these examples demonstrate fast learning convergence, i.e., within seconds or minutes, with highly accurate final performance. We conclude that real-time learning for complex motor systems like humanoid robots is possible with appropriately tailored algorithms, such that increasingly autonomous robots with massive learning abilities should be achievable in the near future.
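Locally weighted learning, of which LWPR is an advanced incremental variant, can be sketched in its simplest batch form: fit a distance-weighted linear model around each query point. The bandwidth and test function below are illustrative; real LWPR learns incrementally and projects to low dimensions, which this sketch omits:

```python
import numpy as np

def lwr_predict(X, y, query, bandwidth=0.05):
    """Locally weighted linear regression evaluated at a single query point."""
    Xb = np.column_stack([X, np.ones(len(X))])       # append a bias column
    q = np.append(query, 1.0)
    # Gaussian weights: nearby samples dominate the local fit.
    w = np.exp(-np.sum((X - query) ** 2, axis=1) / (2 * bandwidth**2))
    A = (Xb.T * w) @ Xb + 1e-8 * np.eye(Xb.shape[1])
    beta = np.linalg.solve(A, (Xb.T * w) @ y)
    return q @ beta

X = np.linspace(0, 1, 50).reshape(-1, 1)
y = np.sin(2 * np.pi * X[:, 0])                      # nonlinear target
pred = lwr_predict(X, y, np.array([0.25]))
print(pred)  # close to sin(pi/2) = 1 despite using only local linear fits
```

The appeal for robot control is that each local model is cheap to update from streaming samples, rather than requiring a global refit.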

14.
MonoSLAM: real-time single camera SLAM
We present a real-time algorithm that can recover the 3D trajectory of a monocular camera moving rapidly through a previously unknown scene. Our system, which we dub MonoSLAM, is the first successful application of the SLAM methodology from mobile robotics to the "pure vision" domain of a single uncontrolled camera, achieving real-time but drift-free performance inaccessible to structure-from-motion approaches. The core of the approach is the online creation of a sparse but persistent map of natural landmarks within a probabilistic framework. Our key novel contributions include an active approach to mapping and measurement, the use of a general motion model for smooth camera movement, and solutions for monocular feature initialization and feature orientation estimation. Together, these add up to an extremely efficient and robust algorithm that runs at 30 Hz on standard PC and camera hardware. This work not only extends the range of robotic systems in which SLAM can be usefully applied, but also opens up new areas. We present applications of MonoSLAM to real-time 3D localization and mapping for a high-performance full-size humanoid robot and to live augmented reality with a hand-held camera.
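The probabilistic framework at MonoSLAM's core is an extended Kalman filter over the camera state and landmark map. A minimal single-step predict/update sketch on a toy constant-velocity state (the matrices and noise levels are illustrative, not MonoSLAM's full camera-plus-map state):

```python
import numpy as np

def ekf_step(x, P, u, z, f, F, h, H, Q, R):
    """One predict/update cycle of an extended Kalman filter."""
    # Predict through the (possibly nonlinear) motion model.
    x_pred = f(x, u)
    P_pred = F @ P @ F.T + Q
    # Update with measurement z.
    y = z - h(x_pred)                       # innovation
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)     # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# Toy constant-velocity state [position, velocity] with a position measurement.
dt = 1.0
F = np.array([[1, dt], [0, 1]])
H = np.array([[1.0, 0.0]])
x, P = np.zeros(2), np.eye(2) * 10.0
Q, R = np.eye(2) * 0.01, np.array([[0.5]])
x, P = ekf_step(x, P, None, np.array([1.2]),
                f=lambda x, u: F @ x, F=F, h=lambda x: H @ x, H=H, Q=Q, R=R)
print(x, np.trace(P))  # the state moves toward the measurement; uncertainty shrinks
```

In MonoSLAM the same cycle runs at frame rate, with landmark measurements chosen actively to keep the covariance small.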

15.
We describe an augmented reality (AR) system that allows multiple participants to interact with 2D and 3D data using tangible user interfaces. The system features face-to-face communication, collaborative viewing and manipulation of 3D models, and seamless access to 2D desktop applications within the shared 3D space. All virtual content, including 3D models and 2D desktop windows, is attached to tracked physical objects in order to leverage the efficiencies of natural two-handed manipulation. The presence of 2D desktop space within 3D facilitates data exchange between the two realms, enables control of 3D information by 2D applications, and generally increases productivity by providing access to familiar tools. We present a general concept for a collaborative tangible AR system, including a comprehensive set of interaction techniques, a distributed hardware setup, and a component-based software architecture that can be flexibly configured using XML. We show the validity of our concept with an implementation of an application scenario from the automotive industry.

16.
We studied the stereoscopic effect obtained from a two-dimensional image without using binocular parallax, which we call “natural3D” (n3D). Unlike a parallax-based three-dimensional (3D) display system, n3D causes less tiredness and avoids both the halving of resolution caused by image division and the dependence on viewing position. To make a display with these effects comfortable to use, we conducted statistical tests with sensory-evaluation experiments and a quantitative evaluation based on physiological responses. These examinations revealed that the n3D effect can be obtained effectively by exploiting, for example, the characteristics of an organic light-emitting diode display, such as high contrast and easy bendability. This study discusses optimal display curvatures, revealed through statistical tests, for displays of different sizes that enhance n3D and reduce tiredness. In addition, we performed an experiment with a frame, called an n3D window (n3Dw), placed before the display so that a subject views the display through the opening of the frame. We found that the combination of a curved display and the n3Dw elicits n3D more effectively.

17.
Deformation grammars are a novel procedural framework for sculpting hierarchical 3D models in an object-dependent manner. They process object deformations as symbols through user-defined interpretation rules. We use them to define hierarchical deformation behaviours tailored to each model, enabling any sculpting gesture to be interpreted as an adapted constraint-preserving deformation. A variety of object-specific constraints can be enforced within this framework, such as maintaining distributions of subparts, avoiding self-penetrations, or meeting semantic user-defined rules. The operations used to maintain constraints are kept transparent to the user, enabling them to focus on their design. We demonstrate the feasibility and versatility of this approach on a variety of examples, implemented within an interactive sculpting system.

18.
We present a sparse optimization framework for extracting sparse shape priors from a collection of 3D models. Shape priors are defined as point-set neighborhoods sampled from shape surfaces that convey important information encompassing normals and local shape characterization. A 3D shape model can be considered to be formed from a set of 3D local shape priors, most of which are likely to have similar geometry. Our key observation is that the local priors extracted from a family of 3D shapes lie on a very low-dimensional manifold. Consequently, a compact and informative subset of priors can be learned to efficiently encode all shapes of the same family. A comprehensive library of local shape priors is first built from the given collection of 3D models of the same family. We then formulate a global, sparse optimization problem that enforces selecting representative priors while minimizing the reconstruction error. To solve the optimization problem, we design an efficient solver based on the method of Augmented Lagrange Multipliers (ALM). Extensive experiments exhibit the power of our data-driven sparse priors in elegantly solving several high-level shape analysis applications and geometry processing tasks, such as shape retrieval, style analysis, and symmetry detection.
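The select-representatives-while-minimizing-reconstruction-error objective is in the spirit of sparse coding. As a simplified stand-in for the paper's ALM solver and group-sparsity objective, the sketch below solves an L1-regularized least-squares selection with ISTA; the dictionary, signal, and threshold are invented:

```python
import numpy as np

def ista(D, x, lam=0.05, iters=500):
    """Sparse coding: min_a 0.5 * ||D a - x||^2 + lam * ||a||_1 via ISTA."""
    L = np.linalg.norm(D, 2) ** 2            # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(iters):
        g = D.T @ (D @ a - x)                # gradient of the data term
        z = a - g / L
        a = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return a

# Dictionary of 6 candidate "priors"; the signal uses only columns 1 and 4.
rng = np.random.default_rng(1)
D = rng.standard_normal((20, 6))
x = 2.0 * D[:, 1] - 1.5 * D[:, 4]
a = ista(D, x)
support = np.flatnonzero(np.abs(a) > 0.5)    # the selected representative priors
print(support)  # → [1 4]
```

The nonzero coefficients identify the representative priors, mirroring how the paper's optimization picks a compact subset from the full library.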

19.
In the digital world, assigning arbitrary colors to an object is a simple operation thanks to texture mapping. However, in the real world, the same basic function of applying colors onto an object is far from trivial. One can specify colors during the fabrication process using a color 3D printer, but this does not apply to already existing objects. Paint and decals can be used during post-fabrication, but they are challenging to apply on complex shapes. In this paper, we develop a method to enable texture mapping of physical objects, that is, we allow one to map an arbitrary color image onto a three-dimensional object. Our approach builds upon hydrographics, a technique to transfer pigments printed on a sheet of polymer onto curved surfaces. We first describe a setup that makes the traditional water transfer printing process more accurate and consistent across prints. We then simulate the transfer process using a specialized parameterization to estimate the mapping between the planar color map and the object surface. We demonstrate that our approach enables the application of detailed color maps onto complex shapes such as 3D models of faces and anatomical casts.

20.
We present an interface for 3D object manipulation in which standard transformation tools are replaced with transient 3D widgets invoked by sketching context-dependent strokes. The widgets are automatically aligned to axes and planes determined by the user's stroke. Sketched pivot-points further expand the interaction vocabulary. Using gestural commands, these basic elements can be assembled into dynamic, user-constructed 3D transformation systems. We supplement precise widget interaction with techniques for coarse object positioning and snapping. Our approach, which is implemented within a broader sketch-based modeling system, also integrates an underlying "widget history" to enable the fluid transfer of widgets between objects. An evaluation indicates that users familiar with 3D manipulation concepts can be taught how to efficiently use our system in under an hour.
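Automatic alignment of widgets to axes determined by the user's stroke can be sketched as snapping the stroke direction to the nearest world axis whenever the angular deviation is small enough. The threshold and vectors below are illustrative, not values from the paper:

```python
import numpy as np

def snap_to_axis(direction, axes=np.eye(3), cos_threshold=0.85):
    """Snap a stroke direction to the closest world axis if it is close enough."""
    d = direction / np.linalg.norm(direction)
    dots = np.abs(axes @ d)                 # |cos| of the angle to each axis
    best = int(np.argmax(dots))
    if dots[best] >= cos_threshold:
        return axes[best] * np.sign(axes[best] @ d)  # keep the stroke's sense
    return d                                # otherwise keep the free direction

stroke = np.array([0.95, 0.1, -0.05])       # a roughly x-aligned stroke
print(snap_to_axis(stroke))  # → [1. 0. 0.]
```

Extending `axes` with an object's local frame would give the context-dependent alignment to object planes that the abstract describes.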


Copyright © Beijing Qinyun Technology Development Co., Ltd. Beijing ICP License No. 09084417