Similar Literature
20 similar records found (search time: 25 ms)
1.
Research on skeleton-based 3D virtual human motion synthesis   (Cited by: 1; self-citations: 0; citations by others: 1)
To address the problems that human body models built for virtual human motion synthesis are overly complex and that the synthesized virtual human motion sequences lack realism, a skeleton-based virtual human motion synthesis method is proposed. Based on an analysis of human body structure, human skeleton data are acquired with 3D graphics software to build a skeletal model of the virtual human. In addition, keyframe quaternion spherical interpolation is combined with time- and space-warping methods to generate diverse virtual human motion sequences. Experimental results verify the effectiveness of the method.
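The keyframe interpolation step mentioned in this abstract is quaternion spherical linear interpolation (slerp). A minimal generic sketch, assuming the (w, x, y, z) quaternion convention; this is illustrative, not the paper's code:

```python
import math

def slerp(q0, q1, t):
    """Spherical linear interpolation between two unit quaternions (w, x, y, z)."""
    dot = sum(a * b for a, b in zip(q0, q1))
    # Take the shorter arc: negate one endpoint if the dot product is negative.
    if dot < 0.0:
        q1 = tuple(-c for c in q1)
        dot = -dot
    # For nearly parallel quaternions fall back to normalized linear interpolation.
    if dot > 0.9995:
        q = tuple((1 - t) * a + t * b for a, b in zip(q0, q1))
        n = math.sqrt(sum(c * c for c in q))
        return tuple(c / n for c in q)
    theta = math.acos(dot)  # angle between the two quaternions
    s0 = math.sin((1 - t) * theta) / math.sin(theta)
    s1 = math.sin(t * theta) / math.sin(theta)
    return tuple(s0 * a + s1 * b for a, b in zip(q0, q1))

# Interpolate halfway between identity and a 90-degree rotation about Z.
q_id = (1.0, 0.0, 0.0, 0.0)
q_z90 = (math.cos(math.pi / 4), 0.0, 0.0, math.sin(math.pi / 4))
q_mid = slerp(q_id, q_z90, 0.5)  # a 45-degree rotation about Z
```

In a motion pipeline, slerp of joint rotations between keyframes is what keeps angular velocity constant along the interpolation arc, avoiding the speed artifacts of componentwise lerp.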

2.
Mastering a skilled motion usually requires a step-by-step progression through multiple learning phases with different subgoals. It is not easy for a learner to properly organize such a complex learning process without assistance. Hence, this task is often facilitated interactively by a human instructor through verbal advice. In many cases, the instructor's teaching strategy in relation to decomposing the entire learning process into phases, setting a subgoal for each learning phase, choosing verbal advice to guide the learner toward this subgoal, etc. remains intuitive and has not yet been formally understood. Thus, taking the basic motion of wok handling as an example, this paper presents several concrete teaching processes involving an advice sequence and the corresponding changes in the motion performance in a feature variable space. Thereby, the paper analyzes and represents the actual strategy taken in an easy-to-interpret form. As a result, it confirms that the instructor determines the set of advice elements to be given based, not simply on the observable characteristics of the latest motion performance, but more adaptively upon the interaction history with the learner.

3.
This paper presents the design and performance of a body-machine-interface (BoMI) system, where a user controls a robotic 3D virtual wheelchair with the signals derived from his/her shoulder and elbow movements. BoMI promotes the perspective that system users should no longer be operators of the engineering design but should be an embedded part of the functional design. This BoMI system provides real-time controllability of robotic devices based on user-specific dynamic body response signatures captured by a high-density 52-channel sensor shirt. The BoMI system not only gives access to the user’s body signals, but also translates these signals from the user’s body to the virtual reality device-control space. We have explored the efficiency of this BoMI system in a semi-cylindrical 3D virtual reality system. Experimental studies are conducted to demonstrate how this transformation of human body signals of multiple degrees of freedom controls a robotic wheelchair navigation task in a 3D virtual reality environment. We have also presented how machine learning can enhance the interface to adapt to the degrees of freedom of the human body by correcting the errors made by the user.

4.
In this paper, we propose a framework to facilitate handheld device-based PointMe interaction with annotated media content, where the user points his/her handheld device at the media presentation screen in order to access services/information from the annotated media. Each annotation is encoded into a customized learning object metadata (LOM) format that provides an access point for available information about a specific media item on the screen. By incorporating a fusion of infra-red (IR) motion points and accelerometer data from the handheld device, the system determines the learner’s PointMe location on the media screen. Using the pointed location, a spatial geometric query is performed over the displayed annotated media in order to deliver level-specific interactive learning content to the learner’s handheld device. The level customization is achieved by incorporating parameters such as the learner’s age and progress into the media content delivery process. We design intuitive learning scenarios by leveraging the PointMe interaction scheme, where learners are challenged to answer quiz questions in learning-based games. This real-world interaction technique with the annotated media and the seamless virtual learning information acquisition process make the system transparent to young learners and encourage them to engage and concentrate on their learning activities. We develop a prototype and perform experiments in a technology-augmented learning space to show the suitability of the proposed approach.

5.
The recent increase in technological maturity has empowered robots to assist humans and provide daily services. Voice commands are a popular human–machine interface for communication. Unfortunately, deaf people cannot exchange information with robots through vocal modalities. To interact with deaf people effectively and intuitively, it is desirable that robots, especially humanoids, have manual communication skills, such as performing sign languages. Without ad hoc programming to generate a particular sign language motion, we present an imitation system that teaches the humanoid robot to perform sign languages by directly replicating observed demonstrations. The system symbolically encodes the information of human hand–arm motion from low-cost depth sensors as a skeleton motion time series that serves to generate the initial robot movement by means of perception-to-action mapping. To tackle the body correspondence problem, a virtual impedance control approach is adopted to smoothly follow the initial movement, while preventing potential risks due to the differences in physical properties between the human and the robot, such as joint limits and self-collision. In addition, the integration of a leg-joint stabilizer provides better balance of the whole robot. Finally, our developed humanoid robot, NINO, successfully learned by imitation from human demonstration to introduce itself using Taiwanese Sign Language.
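The virtual impedance idea described here can be illustrated with a single-joint mass-spring-damper tracker plus a joint-limit clamp. The gains, limits, and explicit Euler integration below are assumptions for illustration, not the paper's controller:

```python
def impedance_step(q, dq, q_ref, dt, k=50.0, b=10.0, m=1.0):
    """One Euler step of a virtual impedance (mass-spring-damper) joint controller
    that smoothly pulls the joint toward a reference angle q_ref."""
    ddq = (k * (q_ref - q) - b * dq) / m  # spring toward reference, damper on velocity
    dq = dq + ddq * dt
    q = q + dq * dt
    return q, dq

def clamp_to_limits(q, q_min, q_max):
    """Enforce joint limits -- one of the correspondence-problem safeguards."""
    return max(q_min, min(q_max, q))

# Track a step reference of 1.0 rad from rest over 10 seconds.
q, dq = 0.0, 0.0
for _ in range(2000):
    q, dq = impedance_step(q, dq, q_ref=1.0, dt=0.005)
    q = clamp_to_limits(q, -2.0, 2.0)
```

With k=50, b=10, m=1 the virtual joint is near critically damped (damping ratio about 0.7), so it settles on the reference without large overshoot, which is the point of following a demonstrated motion "smoothly".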

6.
It has been observed that human limb motions are not very accurate, leading to the hypothesis that the human motor control system may have simplified motion commands at the expense of motion accuracy. Inspired by this hypothesis, we propose learning schemes that trade motion accuracy for motion command simplification. When the original complex motion commands capable of tracking motion accurately are reduced to simple forms, the simplified motion commands can then be stored and manipulated by using learning mechanisms with simple structures and scanty memory resources, and they can be executed quickly and smoothly. We also propose learning schemes that can perform motion command scaling, so that simplified motion commands can be provided for a number of similar motions of different movement distances and velocities without recalculating system dynamics. Simulations based on human motions are reported that demonstrate the effectiveness of the proposed learning schemes in implementing this accuracy-simplification tradeoff.

7.
Actions performed by a virtual character can be controlled with verbal commands such as ‘walk five steps forward’. Similar control of motion style, meaning how the actions are performed, is complicated by the ambiguity of describing individual motions with phrases such as ‘aggressive walking’. In this paper, we present a method for controlling motion style with relative commands such as ‘do the same, but more sadly’. Based on acted example motions, comparative annotations, and a set of calculated motion features, relative styles can be defined as vectors in the feature space. We present a new method for creating these style vectors by finding out which features are essential for a style to be perceived and eliminating those that show only incidental correlations with the style. We show with a user study that our feature selection procedure is more accurate than earlier methods for creating style vectors, and that the style definitions generalize across different actors and annotators. We also present a tool enabling interactive control of parametric motion synthesis by verbal commands. As the control method is independent of the generation of motion, it can be applied to virtually any parametric synthesis method.
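The notion of a style as a vector in feature space, with incidental features eliminated, can be sketched as follows. The feature values and the magnitude-based selection are illustrative stand-ins, not the paper's perceptual feature-selection procedure:

```python
import numpy as np

def style_vector(neutral_feats, styled_feats, keep=2):
    """Estimate a style direction as the mean feature difference between styled
    and neutral examples, keeping only the `keep` largest-magnitude components
    (a crude stand-in for perceptual feature selection)."""
    delta = np.mean(styled_feats, axis=0) - np.mean(neutral_feats, axis=0)
    mask = np.zeros_like(delta)
    idx = np.argsort(np.abs(delta))[-keep:]  # indices treated as essential features
    mask[idx] = 1.0
    return delta * mask                      # incidental features zeroed out

# Invented motion features per clip: [stride length, arm swing, walking speed].
neutral = np.array([[1.0, 0.0, 5.0], [1.2, 0.1, 5.1]])
sad = np.array([[0.4, 0.0, 3.0], [0.6, 0.1, 3.1]])
v = style_vector(neutral, sad)
# "do the same, but more sadly" = push the current motion's features along v.
```

Feature 1 (arm swing) barely changes between the example sets, so it is zeroed out as incidental; the retained components define the direction in feature space that makes a motion read as "sadder".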

8.
It is now possible to capture the 3D motion of the human body on consumer hardware and to puppet skeleton-based virtual characters in real time. However, many characters do not have humanoid skeletons. Characters such as spiders and caterpillars do not have boned skeletons at all, and these characters have very different shapes and motions. In general, character control under arbitrary shape and motion transformations is unsolved: how might these motions be mapped? We control characters with a method that avoids the rigging-skinning pipeline; source and target characters do not have skeletons or rigs. We use interactively defined sparse pose correspondences to learn a mapping between arbitrary 3D point source sequences and mesh target sequences. Then, we puppet the target character in real time. We demonstrate the versatility of our method through results on diverse virtual characters with different input motion controllers. Our method provides a fast, flexible, and intuitive interface for arbitrary motion mapping which provides new ways to control characters for real-time animation.

9.
In this paper, we propose an approach to synthesize new dance routines by combining body part motions from a human motion database. The proposed approach aims to provide a movement source to allow robots or animation characters to perform improvised dances to music, and also to inspire choreographers with the provided movements. Based on the observation that some body parts perform more appropriately than other body parts during dance performances, a correlation analysis of music and motion is conducted to identify the expressive body parts. We then combine the body part movement sources to create a new motion, which differs from all sources in the database. The generated performances are evaluated by a user questionnaire assessment, and the results are discussed to understand what is important in generating more appealing dance routines.
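A music–motion correlation analysis of the kind described can be sketched with a per-body-part Pearson correlation against a music feature. The features, data, and threshold below are invented for illustration and are not the paper's method:

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def expressive_parts(music_energy, part_motion, threshold=0.7):
    """Return body parts whose motion intensity correlates with the music
    feature above `threshold`."""
    return [part for part, motion in part_motion.items()
            if pearson(music_energy, motion) > threshold]

# Invented per-frame music energy and per-part motion intensity.
music = [0.1, 0.9, 0.2, 0.8, 0.3, 0.7]
parts = {
    "arms": [0.2, 1.0, 0.1, 0.9, 0.2, 0.8],  # tracks the beat closely
    "legs": [0.5, 0.5, 0.6, 0.4, 0.5, 0.5],  # mostly uncorrelated
}
chosen = expressive_parts(music, parts)
```

Parts that pass the threshold would then serve as the movement sources to recombine into a new routine.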

10.
This paper describes a motion control system based on a multi-axis motion control card. Built on an industrial PC, a general-purpose operating system, and the PCI-8134 multi-axis motion control card with its function library, with a human-machine interface developed in VC++, the system implements independent motion of the three axes (X, Y, Z), continuous linear motion of each axis, and trapezoidal acceleration/deceleration.
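Trapezoidal acceleration/deceleration planning of the kind such a card performs can be sketched generically; this is a standard profile calculation, not the PCI-8134 function-library API:

```python
def trapezoidal_profile(distance, v_max, accel):
    """Return (t_accel, t_cruise, t_total) for a symmetric trapezoidal move.
    Falls back to a triangular profile when v_max cannot be reached."""
    d_accel = v_max ** 2 / (2 * accel)  # distance covered while accelerating
    if 2 * d_accel >= distance:
        # Triangular profile: accelerate for half the distance, then decelerate.
        t_accel = (distance / accel) ** 0.5
        return t_accel, 0.0, 2 * t_accel
    t_accel = v_max / accel
    t_cruise = (distance - 2 * d_accel) / v_max
    return t_accel, t_cruise, 2 * t_accel + t_cruise

# 100 mm move at v_max = 20 mm/s, accel = 10 mm/s^2:
profile = trapezoidal_profile(100.0, 20.0, 10.0)
```

For the example move, the axis accelerates for 2 s, cruises for 3 s, and decelerates for 2 s; a short move instead degenerates to the triangular branch.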

11.
Visual analysis of human motion is currently one of the most active research topics in computer vision. This strong interest is driven by a wide spectrum of promising applications in many areas such as virtual reality, smart surveillance, perceptual interface, etc. Human motion analysis concerns the detection, tracking and recognition of people, and more generally, the understanding of human behaviors, from image sequences involving humans. This paper provides a comprehensive survey of research on computer-vision-based human motion analysis. The emphasis is on three major issues involved in a general human motion analysis system, namely human detection, tracking and activity understanding. Various methods for each issue are discussed in order to examine the state of the art. Finally, some research challenges and future directions are discussed.

12.
In this paper, a novel framework that enables humanoid robots to learn new skills from demonstration is proposed. The proposed framework makes use of a real-time human motion imitation module as a demonstration interface for providing the desired motion to the learning module in an efficient and user-friendly way. This interface overcomes many problems of currently used interfaces such as direct motion recording, kinesthetic teaching, and immersive teleoperation. This method gives the human demonstrator the ability to control almost all body parts of the humanoid robot in real time (including hand shape and orientation, which are essential to perform object grasping). The humanoid robot is controlled remotely and without any sophisticated haptic devices; it depends only on an inexpensive Kinect sensor and two additional force sensors. To the best of our knowledge, this is the first time a Kinect sensor has been used to estimate hand shape and orientation for object grasping within the field of real-time human motion imitation. The observed motions are then projected onto a latent space using a Gaussian process latent variable model to extract the relevant features. These relevant features are then used to train regression models through the variational heteroscedastic Gaussian process regression algorithm, which has proven to be a very accurate and fast regression algorithm. Our proposed framework is validated using different activities involving both human upper and lower body parts as well as object grasping.

13.
This research is situated within the context of the creation of human learning environments using virtual reality. We propose the integration of a generic and adaptable intelligent tutoring system (Pegase). The aim is to instruct the learner and to assist the instructor. The multi-agent system produces a set of knowledge (actions carried out by the learner, knowledge of the field, etc.) used by an artificial intelligence to make pedagogical decisions. Our study focuses on the representation of knowledge about the environment, and on the adaptable pedagogical agent providing instructive assistance.

14.
Advances in wireless networking and mobile broadband Internet access technology, as well as the rapid development of ubiquitous computing, mean e-learning is no longer limited to certain settings. A ubiquitous learning (u-learning) system, however, must not only provide the learner with learning resources at any time and any place, but must also actively provide the learner with the appropriate learning assistance for their context to help them complete their e-learning activity. In the traditional e-learning environment, the lack of immediate learning assistance, the limitations of the screen interface, or inconvenient operation mean the learner is unable to receive learning resources in a timely manner and incorporate them, based on the actual context, into their learning activities. The result is impaired learning efficiency. Though developments in technology have overcome the constraints on learning space, an inability to appropriately exploit the technology may make it an obstacle to learning instead. When integrating the relevant information technology to develop a u-learning environment, it is therefore necessary to consider the personalization requirements of the learner to ensure that the technology achieves its intended result. This study therefore sought to apply context-aware technology and recommendation algorithms to develop a u-learning system that helps lifelong learners realize personalized learning goals in a context-aware manner and improves learning effectiveness.
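A recommendation step of this kind is often built on a similarity measure between a learner profile and resource feature vectors. A minimal cosine-similarity sketch; the feature axes, names, and values are invented for illustration and are not the paper's algorithm:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def recommend(learner_profile, resources):
    """Rank (name, features) resource pairs by similarity to the learner profile."""
    return sorted(resources, key=lambda r: cosine(learner_profile, r[1]), reverse=True)

# Invented feature axes: [beginner-level, outdoor-context, video-format].
profile = [1.0, 0.0, 1.0]
resources = [("intro video", [0.9, 0.1, 1.0]),
             ("field guide", [0.2, 1.0, 0.0]),
             ("advanced text", [0.1, 0.0, 0.2])]
ranked = recommend(profile, resources)
```

A context-aware system would rebuild `profile` from sensed context (location, time, progress) before each ranking, so the same learner gets different resources in different situations.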

15.
The natural user interface (NUI) has been investigated in a variety of fields in application software. This paper proposes an approach to generate virtual agents that can support users of NUI-based applications through human–robot interaction (HRI) learning in a virtual environment. Conventional HRI learning is carried out by repeating processes that are time-consuming, complicated, and dangerous because of certain features of robots. Therefore, a method is needed to train virtual agents that interact with virtual humans imitating human movements in a virtual environment. The resulting virtual agent can then be applied to NUI-based interactive applications after the interaction learning is completed. The proposed method was applied to a model of a typical house in a virtual environment, with a virtual human performing daily-life activities such as washing, eating, and watching TV. The results show that the virtual agent can predict a human’s intent, identify actions that are helpful to the human, and provide services 16% faster than a virtual agent trained using traditional Q-learning.
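The traditional Q-learning baseline mentioned here can be sketched with a tabular implementation on a toy task; the corridor environment is invented for illustration:

```python
import random

def q_learning(n_states, n_actions, step, episodes=500, alpha=0.5, gamma=0.9,
               eps=0.1, seed=0):
    """Tabular Q-learning on a deterministic environment described by `step`,
    which maps (state, action) -> (next_state, reward, done)."""
    rng = random.Random(seed)
    Q = [[0.0] * n_actions for _ in range(n_states)]
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            # Epsilon-greedy action selection.
            if rng.random() < eps:
                a = rng.randrange(n_actions)
            else:
                a = max(range(n_actions), key=lambda a: Q[s][a])
            s2, r, done = step(s, a)
            target = r if done else r + gamma * max(Q[s2])
            Q[s][a] += alpha * (target - Q[s][a])
            s = s2
    return Q

# Toy 4-state corridor: action 1 moves right (+1 reward at the goal), action 0 left.
def corridor(s, a):
    s2 = min(s + 1, 3) if a == 1 else max(s - 1, 0)
    return s2, (1.0 if s2 == 3 else 0.0), s2 == 3

Q = q_learning(n_states=4, n_actions=2, step=corridor)
policy = [max(range(2), key=lambda a: Q[s][a]) for s in range(3)]  # greedy per state
```

After training, the greedy policy moves right in every non-goal state; the paper's contribution is that its imitation-trained agent outperforms an agent trained this way.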

16.
Advanced Robotics, 2013, 27(13): 1503–1520
This paper presents a new framework to synthesize humanoid behavior by learning and imitating the behavior of an articulated body using motion capture. The video-based motion capturing method has been developed mainly for analysis of human movement, but is very rarely used to teach or imitate the behavior of an articulated body to a virtual agent in an on-line manner. Using our proposed applications, new behaviors of one agent can be simultaneously analyzed and used to train or imitate another with a novel visual learning methodology. In the on-line learning phase, we propose a new way of synthesizing humanoid behavior based on on-line learning of principal component analysis (PCA) bases of the behavior. Although there are many existing studies which utilize PCA for object/behavior representation, this paper introduces two criteria to determine if the dimension of the subspace is to be expanded or not and applies a Fisher criterion to synthesize new behaviors. The proposed methodology is well-matched to both behavioral training and synthesis, since it is automatically carried out as an on-line long-term learning of humanoid behaviors without the overhead of an expanding learning space. The final outcome of these methodologies is to synthesize multiple humanoid behaviors for the generation of arbitrary behaviors. The experimental results using a humanoid figure and a virtual robot demonstrate the feasibility and merits of this method.
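The decision of whether to expand the PCA subspace dimension can be illustrated with a simple reconstruction-error test. This is a simplified stand-in; the paper's actual criteria, including the Fisher criterion, are more elaborate:

```python
import numpy as np

def update_subspace(basis, mean, x, tol=1e-2):
    """Expand an orthonormal basis (rows) with a new observation x if its
    reconstruction error exceeds tol; otherwise keep the dimension unchanged."""
    r = x - mean
    if basis.shape[0] > 0:
        # Residual of x orthogonal to the current subspace.
        r = r - basis.T @ (basis @ (x - mean))
    err = np.linalg.norm(r)
    if err > tol:
        basis = np.vstack([basis, r / err])  # add normalized residual as new axis
    return basis

d = 5
basis = np.zeros((0, d))
mean = np.zeros(d)
samples = [np.eye(d)[0], np.eye(d)[1], 0.5 * (np.eye(d)[0] + np.eye(d)[1])]
for x in samples:
    basis = update_subspace(basis, mean, x)
# The third sample lies in the span of the first two, so the dimension stays at 2.
```

This is the core of on-line long-term learning without an ever-expanding learning space: the basis only grows when an observed behavior genuinely leaves the current subspace.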

17.
This paper presents a model of (en)action from a conceptual and theoretical point of view. This model is used to provide solid bases to overcome the complexity of designing virtual environments for learning (VEL). It provides a common grounding for trans-disciplinary collaborations where embodiment can be perceived as the cornerstone of the project. Where virtual environments are concerned, both computer scientists and educationalists have to deal with the learner/user’s body; therefore the model provides tools with which to approach both human actions and learning processes within a threefold model. It is mainly based on neuroscientific research, including enaction and the neurophysiology of action.

18.
The seemingly simple everyday actions of moving limb and body to accomplish a motor task or interact with the environment are incredibly complex. To reach for a target we first need to sense the target’s position with respect to an external coordinate system; we then need to plan a limb trajectory which is executed by issuing an appropriate series of neural commands to the muscles. These, in turn, exert appropriate forces and torques on the joints leading to the desired movement of the arm. Here we review some of the earlier work as well as more recent studies on the control of human movement, focusing on behavioral and modeling studies dealing with task-space and joint-space movement planning. At the task level, we describe studies investigating trajectory planning and inverse kinematics problems during point-to-point reaching movements as well as two-dimensional (2D) and three-dimensional (3D) drawing movements. We discuss models dealing with the two-thirds power law, particularly differential geometrical approaches dealing with the relation between path geometry and movement velocity. We also discuss optimization principles such as the minimum-jerk model and the isochrony principle for point-to-point and curved movements. We next deal with joint-space movement planning and generation, discussing the inverse kinematics problem and common solutions to the problems of kinematic redundancy. We address the question of which reference frames are used by the nervous system and review studies examining the employment of kinematic constraints such as Donders’ and Listing’s laws. We also discuss optimization approaches based on Riemannian geometry. One principle of motor coordination during human locomotion emerging from this body of work is the intersegmental law of coordination. However, the nature of the coordinate systems underlying motion planning remains of interest as they are related to the principles governing the control of human arm movements.
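Two of the models reviewed here have standard closed forms: the minimum-jerk trajectory for point-to-point reaches and the two-thirds power law relating speed to path curvature. A brief sketch:

```python
def minimum_jerk(x0, xf, T, t):
    """Minimum-jerk position along a point-to-point reach of duration T:
    x(t) = x0 + (xf - x0) * (10*tau^3 - 15*tau^4 + 6*tau^5), tau = t/T.
    Velocity and acceleration vanish at both endpoints."""
    tau = t / T
    return x0 + (xf - x0) * (10 * tau**3 - 15 * tau**4 + 6 * tau**5)

def power_law_speed(curvature, k=1.0):
    """Two-thirds power law: tangential speed scales as curvature**(-1/3),
    so movement slows down in highly curved path segments."""
    return k * curvature ** (-1.0 / 3.0)

# Midpoint of a unit reach lies exactly halfway, by symmetry of the profile.
x_mid = minimum_jerk(0.0, 1.0, 1.0, 0.5)
```

The minimum-jerk polynomial produces the smooth bell-shaped speed profile observed in reaching, while the power law captures the geometry–velocity coupling seen in drawing movements.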

19.
To address the constraints of venue and equipment on traditional sports under the normalized epidemic-prevention context, as well as the high price and limited extensibility of related commercial products, a virtual sports interaction system based on real-time video sensing is proposed. The system includes a video data acquisition module and a human joint-point extraction module, using OpenPose to obtain human joint coordinates and capture hand gestures and body movements in real time. The action semantic understanding module covers both exercise actions and drawing actions. The former recognizes exercise-action semantics from the relative positions of body joints during movement; the latter renders the drawing trajectory of the wrist joint into a sketch image, which is classified with AlexNet and parsed into the corresponding drawing-action semantics. The model achieves a classification accuracy of 98.83% on edge devices. A sketch-based game application built with Unity serves as the visual interactive interface, enabling motion interaction in a virtual scene. The system supports home exercise through real-time video-sensing interaction without any additional external devices, offering greater engagement and enjoyment.
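Recognizing action semantics from the relative positions of joints can be sketched with simple keypoint rules. The joint names, coordinates (y increasing downward, as in typical image coordinates), and rules below are illustrative assumptions, not the paper's actual semantics:

```python
def classify_pose(kp):
    """Coarse pose label from 2D joint keypoints {name: (x, y)}.
    y grows downward, so 'above the head' means a smaller y value."""
    left_up = kp["left_wrist"][1] < kp["head"][1]
    right_up = kp["right_wrist"][1] < kp["head"][1]
    if left_up and right_up:
        return "hands_up"
    if left_up or right_up:
        return "one_hand_up"
    return "hands_down"

# One wrist above the head, the other below.
pose = classify_pose({"head": (0.5, 0.2),
                      "left_wrist": (0.3, 0.1),
                      "right_wrist": (0.7, 0.5)})
# Both wrists above the head.
pose_up = classify_pose({"head": (0.5, 0.2),
                         "left_wrist": (0.4, 0.1),
                         "right_wrist": (0.6, 0.05)})
```

Rules over relative joint positions of this kind are cheap enough to run per frame on an edge device, which is why the heavier CNN classifier is reserved for the sketch images.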

20.
This paper focuses on multiplayer cooperative interaction in a shared haptic environment based on a local area network. Decoupled motion control, which allows one user to manipulate a haptic interface to control only one-dimensional movement of an avatar, is presented as a new type of haptic-based cooperation among multiple users. Users respectively move an avatar along one coordinate axis, so that the motion of the avatar is the synthesis of movements along all axes. This differs from previous haptic cooperation, where all users can apply forces on an avatar along any direction to move it, and its motion completely depends on the resultant force. A novel concept of movement feedback is put forward, in which one user can sense other users’ hand motions through his or her own haptic interface. In other words, a person who is required to move a virtual object along only one axis can also feel the motions of the virtual object along the other axes. Movement feedback, which is a feeling of motion, differs from force feedback, such as gravity, collision force, and resistance. A spring-damper force model is proposed for the computation of motion feedback to implement movement transmission among users through haptic devices. Experimental results validate that movement feedback is beneficial for performance enhancement of this kind of haptic-based cooperation, and the effect of movement feedback on performance improvement is also evaluated by all subjects. Copyright © 2012 John Wiley & Sons, Ltd.
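A spring-damper coupling for rendering another user's motion can be sketched as follows. The gains and the exact coupling terms are assumptions for illustration, not the paper's model:

```python
def spring_damper_force(k, b, pos_self, pos_other, vel_self, vel_other):
    """Virtual coupling force rendered on one user's haptic device along a
    non-controlled axis, pulling the device toward the other user's motion.
    k is the spring stiffness, b the damping coefficient."""
    return k * (pos_other - pos_self) + b * (vel_other - vel_self)

# The other user's hand is 0.1 m ahead and both are momentarily at rest:
f = spring_damper_force(k=100.0, b=5.0,
                        pos_self=0.0, pos_other=0.1,
                        vel_self=0.0, vel_other=0.0)
```

The spring term conveys the partner's displacement and the damper term their relative velocity, which together make the partner's motion "feelable" without being a contact force like gravity or collision.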
