Similar Documents
20 similar documents found (search time: 31 ms)
1.
A simple approach for the simulation of bipedal locomotion is presented. It is based on a kinematically driven rigid body dynamics simulation. As inputs, our system uses a 14-DOF simplified human body model, the instantaneous walking direction, and the gait length. From these parameters, the trajectories of the left and right feet are computed over time using a purely kinematical recipe. Optionally, trajectories of other body parts (head, shoulders, hips, hands, knees, etc.) may be derived from a simple kinematic body model. Using these additional trajectories provides increased control over the motion, adding extra kinematic constraints to the dynamical system. The trajectories, together with optional force fields (gravity, etc.), serve to drive the motion of the articulated human body model. This means that for those points in the body that have not been specified kinematically, a rigid body dynamics calculation is used to describe their motion over time. An implementation of these ideas achieves a frame update rate of about 20 Hz on a Personal IRIS™ workstation, which is sufficiently fast for real-time interaction.
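The kinematic recipe for the feet can be illustrated with a toy sketch. This is not the paper's formulation: the step timing, the half-sine swing arc, and the function name are all assumptions made for illustration.

```python
import math

def foot_position(t, gait_length, direction, step_period=1.0, step_height=0.05):
    """One foot's position at time t: the foot advances `gait_length` per step
    along the unit vector `direction`, lifting in a half-sine arc during the
    swing phase and staying planted during the stance phase."""
    dx, dy = direction
    n = math.hypot(dx, dy)
    dx, dy = dx / n, dy / n                  # normalise the walking direction
    step, frac = divmod(t / step_period, 1.0)
    if frac < 0.5:                           # first half of the cycle: swing
        progress = step + 2.0 * frac         # foot moves forward...
        z = step_height * math.sin(math.pi * 2.0 * frac)  # ...and lifts
    else:                                    # second half: stance
        progress = step + 1.0                # foot planted at the next footfall
        z = 0.0
    d = gait_length * progress
    return (d * dx, d * dy, z)
```

For example, with a 0.6 m gait length along the x-axis, the foot is mid-swing and lifted at t = 0.25 s, and planted 0.6 m ahead at t = 0.75 s.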

2.
Video-based real-time automatic human height measurement   Cited by: 3 (self-citations: 0, citations by others: 3)
How can the height of a moving person in a scene be measured accurately and in real time from video? To address this problem, this paper proposes an automatic, real-time human height measurement method. The method first extracts a new type of head feature point and a new type of foot feature point in each frame of the video sequence, then builds constraint equations from these feature points to obtain an approximate body height, while simultaneously tracking both feet through the sequence. Finally, based on the obtained foot-tracking results, a geometric constraint on the space points corresponding to the feature points is introduced to further refine the measurement. Compared with many previous measurement methods, this method makes effective use of the motion information contained in the video sequence, is robust, achieves high measurement accuracy, handles video from both perspective and fisheye lenses, and has a low computational cost that permits real-time operation. Experimental results verify the effectiveness and real-time performance of the algorithm.
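The kind of per-frame constraint that head and foot feature points provide can be illustrated with a deliberately simplified single-view model. This is not the paper's constraint system: it assumes a pinhole camera at a known height with a horizontal optical axis, image rows increasing downward, and feet on a flat ground plane.

```python
def person_height(y_head, y_foot, cam_height, cy):
    """Height of a standing person from one frame.
    y_head, y_foot: image rows of the head and foot points;
    cam_height: camera height above the ground plane (metres);
    cy: principal row (image row of the horizon for a level camera).
    Derivation: foot depth Z = f * cam_height / (y_foot - cy), and the head
    row gives cam_height - h = Z * (y_head - cy) / f; eliminating Z and f
    yields the ratio below."""
    if y_foot <= cy:
        raise ValueError("foot must project below the principal row")
    return cam_height * (y_foot - y_head) / (y_foot - cy)
```

For instance, with the camera 3 m high, cy = 240, a foot at row 480 and a head at row 360, the estimated height is 1.5 m.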

3.
4.
This paper presents a foreground extraction method for live-streaming videos using dual-side cameras on mobile devices. Compared to conventional methods, which estimate both foreground and background models from the front camera, the proposed method uses the rear camera to infer the reference background model. To this end, a short-term trajectory analysis is first performed to cluster point trajectories of the front camera, and a long-term trajectory analysis is then performed to compare the paths of the clustered trajectories with the reference path obtained from the rear camera. In particular, clusters having high correlation are classified as background using a Gaussian mixture model. Additionally, a pixel-wise segmentation map is obtained via graph-based segmentation. Experimental results show that the proposed method is robust under a variety of camera motions, outperforming state-of-the-art methods. Code and dataset can be found at https://github.com/YCL92/dualCamSeg.

5.
Research on bipedal locomotion has shown that a dynamic walking gait is energetically more efficient than a statically stable one. Analogously, even though statically stable multi-wheeled robots are easier to control, they are energetically less efficient and must keep accelerations low to avoid tipping over. In contrast, the ballbot is an underactuated, nonholonomically constrained mobile robot whose upward equilibrium point has to be stabilised by active control. In this work, we derive coordinate-invariant, reduced, Euler–Poincaré equations of motion for the ballbot. By means of partial feedback linearisation, we obtain two independent passive outputs with corresponding storage functions and use these to construct energy-shaping control laws which move the system along the trajectories of a new Lagrangian system whose desired equilibrium point is asymptotically stable by construction. The basin of attraction of this controller is shown to be almost global under certain conditions on the design of the mechanism, which are reflected directly in the mass matrix of the unforced equations of motion.

6.
The multiple view geometry of static scenes is now well understood. Recently, attention has turned to dynamic scenes, where scene points may move while the cameras move. The triangulation of linear trajectories is now well handled, and the case of quadratic trajectories has also received some attention. We present a complete generalization and address the problem of general trajectory triangulation of moving points from non-synchronized cameras. Two cases are considered: (i) the motion is captured in the images by tracking the moving point itself; (ii) only the tangents of the motion are extracted from the images. The first case is based on a representation of curves (trajectories), new to computer vision, in which a curve is represented by a family of hypersurfaces in the projective space ℙ^5. The second case is handled by considering the dual curve of the curve generated by the trajectory. In both cases these representations of curves allow: (i) the triangulation of the trajectory of a moving point from non-synchronized sequences; (ii) the recovery of a more standard representation of the whole trajectory; (iii) the computation of the positions of the moving point at each time instant at which an image was taken. Furthermore, theoretical considerations lead to a general theorem stipulating how many independent constraints a camera provides on the motion of the point; this number of constraints is a function of the camera motion. On the computational front, in both cases the triangulation leads to equations in which the unknowns appear linearly, so the problem reduces to estimating a high-dimensional parameter in the presence of heteroscedastic noise. Several methods are tested.

7.
In video post-production applications, camera motion analysis and alignment are important for ensuring geometric correctness and temporal consistency. In this paper, we trade some generality in estimating and aligning camera motion for reduced computational complexity and a more purely image-based formulation. The main contribution is to use fundamental ratios to synchronize video sequences of distinct scenes captured by cameras undergoing similar motions. We also present a simple method to align 3D camera trajectories when the fundamental ratios are not able to match the noisy trajectories. Experimental results show that our method can accurately synchronize sequences even when the scenes are totally different and have dense depths. An application to 3D object transfer is also demonstrated.

8.
Some articulated motion representations rely on frame-wise abstractions of the statistical distribution of low-level features such as orientation, color, or relational distributions. As the configuration among parts changes with articulated motion, the distribution changes, tracing a trajectory in the latent space of distributions, which we call the configuration space. These trajectories can then be used for recognition using standard techniques such as dynamic time warping. The core theory in this paper concerns embedding the frame-wise distributions, which can be looked upon as probability functions, into a low-dimensional space so that we can estimate various meaningful probabilistic distances such as the Chernoff, Bhattacharyya, Matusita, Kullback-Leibler (KL), or symmetric-KL distances based on dot products between points in this space. Apart from computational advantages, this representation also affords speed-normalized matching of motion signatures. Speed-normalized representations can be formed by interpolating the configuration trajectories along their arc lengths, without using any knowledge of the temporal scale variations between the sequences. We experiment with five different probabilistic distance measures and show the usefulness of the representation in three different contexts: sign recognition (with a large number of possible classes), gesture recognition (with person variations), and classification of human-human interaction sequences (with segmentation problems). We find that using the right distance measure for each situation is important. The low-dimensional embedding makes matching two to three times faster, while achieving recognition accuracies that are close to those obtained without the embedding. We also empirically establish the robustness of the representation with respect to low-level parameters, embedding parameters, and temporal-scale parameters.
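The dot-product property the abstract mentions is easiest to see for the Bhattacharyya distance: its coefficient is exactly an inner product of square-root densities, so it survives any dot-product-preserving low-dimensional embedding. A minimal sketch for discrete distributions (the function name is illustrative, not from the paper):

```python
import numpy as np

def bhattacharyya_distance(p, q):
    """Bhattacharyya distance between two discrete distributions.
    The coefficient BC = sum_i sqrt(p_i * q_i) is the dot product
    <sqrt(p), sqrt(q)>, and the distance is -ln(BC)."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    p = p / p.sum()                      # normalise to proper distributions
    q = q / q.sum()
    bc = float(np.sqrt(p * q).sum())     # dot product of square-root densities
    return -np.log(bc)
```

Identical distributions give distance 0; the distance grows as the distributions diverge.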

9.
The main indicator of dynamic balance is the ZMP (zero-moment point). Its original notion assumes that both feet of the robot are in contact with a flat horizontal surface (all contacts in the same plane) and that friction is high enough that sliding does not occur. With the increasing capabilities of humanoid robots and the higher complexity of the motions they need to perform, these assumptions might not hold. Bearing in mind that the system is dynamically balanced if there is no rotation about the edges of the feet and the feet do not slide, we propose a novel approach for testing the dynamic balance of bipedal robots, using linear contact wrench conditions compiled into a single matrix (the Dynamic Balance Matrix). The proposed approach has wide applicability, since it can be used to check the stability of different kinds of contacts (including point, line, and surface) with arbitrary perimeter shapes. Motion feasibility conditions are derived from the conditions which the wrench of each contact has to satisfy. The approach was tested in simulation in two scenarios: a biped climbing up, and walking sideways on an inclined flat surface too steep for a regular walk without additional support. The whole-body motion was synthesized and performed using a generalized task prioritization framework.
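For reference, the classical ZMP whose assumptions this paper generalizes can be sketched for a set of point masses on flat horizontal ground. This is the textbook formula, not the paper's Dynamic Balance Matrix:

```python
import numpy as np

def zmp_xy(m, p, a, g=9.81):
    """Ground-projected ZMP of a multibody system approximated by point masses,
    under the classical assumptions (flat horizontal ground, no sliding).
    m: (n,) masses; p: (n, 3) positions; a: (n, 3) accelerations."""
    m = np.asarray(m, dtype=float)
    p = np.asarray(p, dtype=float)
    a = np.asarray(a, dtype=float)
    denom = (m * (a[:, 2] + g)).sum()    # total vertical ground-reaction term
    x = (m * ((a[:, 2] + g) * p[:, 0] - a[:, 0] * p[:, 2])).sum() / denom
    y = (m * ((a[:, 2] + g) * p[:, 1] - a[:, 1] * p[:, 2])).sum() / denom
    return x, y
```

In the static case (all accelerations zero), the ZMP reduces to the ground projection of the centre of mass, as expected.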

10.
This paper addresses the problem of estimating camera motion from a non-calibrated monocular camera. Compared to existing methods that rely on restrictive assumptions, we propose a method that can estimate camera motion under far fewer restrictions by adopting new example-based techniques to compensate for the lack of information. Specifically, we estimate the focal length of the camera by referring to visually similar training images with associated focal lengths. For single-step camera estimation, we refer to stationary points (landmark points) whose depths are estimated from RGB-D candidates. In addition to landmark points, moving objects can also be used as an information source for estimating the camera motion. Our method therefore simultaneously estimates the camera motion for a video and the 3D trajectories of objects in that video, using Reversible Jump Markov Chain Monte Carlo (RJ-MCMC) particle filtering. The method is evaluated on challenging datasets, demonstrating its effectiveness and efficiency.

11.
This paper presents a probabilistic framework for reasoning about the safety of robot trajectories in dynamic and uncertain environments with imperfect information about the future motion of surrounding objects. For safety assessment, the overall collision probability is used to rank candidate trajectories, considering the probability of colliding with known objects as well as the estimated collision probability beyond the planning horizon. In addition, we introduce a safety assessment cost metric, the probabilistic collision cost, which considers the relative speeds and masses of the moving objects with which the robot may collide. The collision probabilities with other objects are estimated by probabilistic reasoning about their future motion trajectories as well as the ability of the robot to avoid them. The results are integrated into a navigation framework that generates and selects trajectories that strive to maximize safety while minimizing the time to reach a goal location. An example implementation of the proposed framework is applied to simulation scenarios that explore some of the inherent computational trade-offs.

12.
Camera networks have gained increased importance in recent years. Existing approaches mostly use point correspondences between different camera views to calibrate such systems. However, it is often difficult or even impossible to establish such correspondences. But even without feature point correspondences between different camera views, if the cameras are temporally synchronized then the data from the cameras are strongly linked together by the motion correspondence: all the cameras observe the same motion. The present article therefore develops the necessary theory to use this motion correspondence for general rigid as well as planar rigid motions. Given multiple static affine cameras which observe a rigidly moving object and track feature points located on this object, what can be said about the resulting point trajectories? Are there any useful algebraic constraints hidden in the data? Is a 3D reconstruction of the scene possible even if there are no point correspondences between the different cameras? And if so, how many points are sufficient? Is there an algorithm which guarantees finding the correct solution to this highly non-convex problem? This article addresses these questions and thereby introduces the concept of low-dimensional motion subspaces. The constraints provided by these motion subspaces enable an algorithm which ensures finding the correct solution to this non-convex reconstruction problem. The algorithm is based on multilinear analysis, matrix and tensor factorizations. Our new approach can handle extreme configurations, e.g. a camera in a camera network tracking only a single point. Results on synthetic as well as on real data sequences act as a proof of concept for the presented insights.

13.
The analysis of periodic or repetitive motions is useful in many applications, such as the recognition and classification of human and animal activities. Existing methods for the analysis of periodic motions first extract motion trajectories using spatial information and then determine whether they are periodic. These approaches are mostly based on feature matching or spatial correlation, which are often infeasible, unreliable, or computationally demanding. In this paper, we present a new approach based on the time-frequency analysis of the video sequence as a whole. Multiple periodic trajectories are extracted and their periods estimated simultaneously. The objects that are moving in a periodic manner are extracted using spatial domain information. Experiments with synthetic and real sequences demonstrate the capabilities of this approach.
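Frequency-domain period estimation of this kind boils down to locating spectral peaks. A minimal sketch for a single scalar motion signal (the paper analyses the video sequence as a whole; this one-signal version is only illustrative):

```python
import numpy as np

def dominant_period(signal, fps):
    """Estimate the period (in seconds) of a repetitive motion signal,
    e.g. one coordinate of a tracked point sampled at `fps` frames/second,
    from the dominant peak of its magnitude spectrum."""
    x = np.asarray(signal, dtype=float)
    x = x - x.mean()                         # remove the DC component
    spec = np.abs(np.fft.rfft(x))            # magnitude spectrum
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)
    k = int(np.argmax(spec[1:]) + 1)         # strongest non-DC bin
    return 1.0 / freqs[k]
```

For a clean 2 Hz sinusoid sampled at 30 fps over 10 s, the estimate is exactly 0.5 s; with noisy real trajectories, longer windows and peak interpolation would be needed.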

14.
Image and Vision Computing, 2002, 20(5-6): 349-358
Human activities are characterised by the spatio-temporal structure of their motion patterns. Such structures can be represented as temporal trajectories in a high-dimensional feature space of closely correlated measurements of visual observations. Models of such temporal structures need to account for the probabilistic and uncertain nature of motion patterns, their non-linear temporal scaling, and ambiguities in temporal segmentation. In this paper, we address these problems by introducing a statistical dynamic framework for modelling and recognising human activities, based on learned priors and the continuous propagation of density models of behaviour patterns. The priors are learned from example sequences using hidden Markov models, and the density models are augmented by current visual observations.

15.
In this paper, the problem of automated scene understanding by tracking and predicting paths for multiple humans is tackled with a new methodology using data from a single, fixed camera monitoring the environment. Our main idea is to build goal-oriented prior motion models that drive both the tracking and path prediction algorithms, based on a coarse-to-fine modeling of the target goal. To implement this idea, we use a dataset of training video sequences with associated ground-truth trajectories, from which we hierarchically extract a set of key locations. These key locations may correspond to exit/entrance zones in the observed scene, or to crossroads where trajectories often have abrupt changes of direction. A simple heuristic allows us to make piecewise associations of the ground-truth trajectories to the key locations, and we use these data to learn one statistical motion model per key location, based on the variations of the trajectories in the training data and on a regularizing prior over the models' spatial variations. We illustrate how to use these motion priors within an interacting multiple model scheme for target tracking and path prediction, and we finally evaluate the methodology with experiments on common datasets for comparing tracking algorithms.

16.
This paper presents a CPG (central pattern generator) approach to bipedal locomotion based on phase oscillators, in which a designer with little a priori knowledge can incrementally add basic motion primitives, arriving at bipedal walking and other locomotor behaviors as the final result. The proposed CPG aims to be a model-free solution for the generation of bipedal walking, requiring neither inverse kinematic models nor previously defined joint trajectories. The incremental construction of bipedal walking allows easier parametrization and performance evaluation throughout the design process. Furthermore, the approach provides a developmental mechanism that enables progressively building a motor repertoire, and it could readily benefit from evolutionary robotics and machine learning to explore this aspect. The proposed CPG system also offers a good substrate for the inclusion of feedback mechanisms for modulation and adaptation; a phase regulation mechanism using load sensory information, as observed in legged vertebrates, is explored. Results from simulations of HOAP and DARwIn-OP in the Webots software show that the locomotor system can generate bipedal walking on different robots. Experiments on a DARwIn-OP demonstrate how it accomplishes locomotion and how the proposed approach generalizes, achieving several distinct locomotor behaviors.

17.
We propose a novel approach for activity analysis in multiple synchronized but uncalibrated static camera views. In this paper, we refer to activities as motion patterns of objects, which correspond to paths in far-field scenes. We assume that the topology of cameras is unknown and quite arbitrary, the fields of view covered by these cameras may have no overlap or any amount of overlap, and objects may move on different ground planes. Using low-level cues, objects are first tracked in each camera view independently, and the positions and velocities of objects along trajectories are computed as features. Under a probabilistic model, our approach jointly learns the distribution of an activity in the feature spaces of different camera views. Then, it accomplishes the following tasks: 1) grouping trajectories, which belong to the same activity but may be in different camera views, into one cluster; 2) modeling paths commonly taken by objects across multiple camera views; and 3) detecting abnormal activities. Advantages of this approach are that it does not require first solving the challenging correspondence problem, and that learning is unsupervised. Even though correspondence is not a prerequisite, after the models of activities have been learned, they can help to solve the correspondence problem, since if two trajectories in different camera views belong to the same activity, they are likely to correspond to the same object. Our approach is evaluated on a simulated data set and two very large real data sets, which have 22,951 and 14,985 trajectories, respectively.

18.
19.
We introduce the concept of self-calibration of a 1D projective camera from point correspondences, and describe a method for uniquely determining the two internal parameters of a 1D camera based on the trifocal tensor of three 1D images. The method requires estimating the trifocal tensor (which, unlike the trifocal tensor of 2D images, can be done linearly with no approximation) and solving for the roots of a cubic polynomial in one variable. Interestingly, we prove that a 2D camera undergoing planar motion reduces to a 1D camera. From this observation, we deduce a new method for self-calibrating a 2D camera using planar motions. Both the self-calibration method for a 1D camera and its application to 2D camera calibration are demonstrated on real image sequences.

20.
Navigation and monitoring of large and crowded virtual environments is a challenging task that requires intuitive camera control techniques to assist users. In this paper, we present a novel automatic camera control technique providing a scene analysis framework based on information theory. The framework contains a probabilistic model of the scene used to build entropy and expectancy maps. These maps are used to find interest points which represent either characteristic behaviors of the crowd or novel events occurring in the scene. After an interest point is chosen, the camera is updated accordingly to display it. We tested our model in a crowd simulation environment, where it performed successfully. Our method can be integrated into existing camera control modules in computer games, crowd simulations, and movie pre-visualization applications.
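An entropy map of the kind this framework builds can be sketched as per-cell Shannon entropy over a histogram of observed crowd behaviours. The grid layout and the behaviour quantisation below are assumptions for illustration, not the paper's model:

```python
import numpy as np

def entropy_map(counts):
    """Per-cell entropy of a crowd-occupancy grid.  counts[i, j, k] is how
    often behaviour k (e.g. a quantised motion direction) was observed in
    grid cell (i, j).  High-entropy cells host varied activity and are
    therefore candidate interest points for the camera."""
    c = np.asarray(counts, dtype=float)
    # normalise each cell's counts into a behaviour distribution
    p = c / np.clip(c.sum(axis=-1, keepdims=True), 1e-12, None)
    with np.errstate(divide="ignore", invalid="ignore"):
        h = -np.where(p > 0, p * np.log2(p), 0.0).sum(axis=-1)
    return h
```

A cell where four behaviours occur equally often scores 2 bits; a cell with a single dominant behaviour scores 0, so a camera controller would prefer the former.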


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号