Similar Documents
20 similar documents retrieved (search time: 31 ms).
1.
A data glove based on MEMS inertial sensors is designed. Following the principles of inertial navigation and rigid-body dynamics, a body-sensing network of miniature sensors is constructed, and fused multi-sensor data are solved to obtain motion attitude information, capturing the motion data of each finger joint. A three-dimensional virtual-hand capture system built with computer graphics techniques is used for comparative performance evaluation. Experimental results show that the system is stable and adaptable and captures hand motion information effectively in real time.

2.
陈鹏展  李杰  罗漫 《计算机应用》2015,35(8):2316-2320
To address the attitude drift, limited real-time performance, and high cost of current inertial motion capture systems, a low-power, low-cost real-time human motion capture system that effectively suppresses attitude drift is designed. First, based on human kinematics, distributed joint-capture nodes are constructed; each node runs in a low-power mode and automatically enters sleep mode when its sampled data fall below a preset threshold, reducing system power consumption. Inertial navigation combined with a Kalman filter computes human body attitude in real time, mitigating the drift of traditional algorithms. Attitude data are forwarded over a Wi-Fi module using TCP/IP to drive the model in real time. A multi-axis motor test platform was used to evaluate the accuracy of the algorithm, and the system's tracking of a real human body was also assessed. Experimental results show that the improved algorithm is more accurate than the traditional complementary filter, keeping angular drift within roughly 1°, and introduces no noticeable additional latency relative to the complementary filter, enabling accurate tracking of human motion.
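The Kalman-based attitude solver in item 2 is not spelled out in the abstract. As a minimal sketch of the underlying idea (not the authors' implementation; the class name and noise tunings below are assumptions), a two-state tilt filter fuses a gyro rate with an accelerometer-derived angle, so the bias state absorbs gyro drift:

```python
import numpy as np

class TiltKalman:
    """Two-state Kalman filter: state = [tilt angle (rad), gyro bias (rad/s)].
    The gyro rate drives the prediction; the accelerometer angle corrects drift."""

    def __init__(self, q_angle=1e-4, q_bias=1e-6, r_acc=1e-2):
        self.x = np.zeros(2)                  # [angle, gyro bias]
        self.P = np.eye(2)                    # state covariance
        self.Q = np.diag([q_angle, q_bias])   # process noise (assumed tuning)
        self.R = r_acc                        # accelerometer noise (assumed)
        self.H = np.array([[1.0, 0.0]])       # we observe the angle only

    def update(self, gyro_rate, acc_angle, dt):
        # Predict: integrate the bias-corrected gyro rate.
        F = np.array([[1.0, -dt], [0.0, 1.0]])
        self.x = np.array([self.x[0] + dt * (gyro_rate - self.x[1]), self.x[1]])
        self.P = F @ self.P @ F.T + self.Q * dt
        # Correct with the accelerometer-derived angle.
        y = acc_angle - self.x[0]                           # innovation
        S = (self.H @ self.P @ self.H.T).item() + self.R    # innovation covariance
        K = (self.P @ self.H.T / S).ravel()                 # Kalman gain
        self.x = self.x + K * y
        self.P = (np.eye(2) - np.outer(K, self.H)) @ self.P
        return self.x[0]                                    # drift-corrected angle
```

Fed a pitch rate from the gyro and an atan2-based pitch from the accelerometer at each sample, the bias state keeps the integrated angle from drifting, which is the behavior the abstract quantifies as staying within about 1°.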

3.
Motion study of the hip joint in extreme postures
Many causes can be at the origin of hip osteoarthritis (e.g., cam/pincer impingements), but the exact pathogenesis of idiopathic osteoarthritis has not yet been clearly delineated. The aim of the present work is to analyze the consequences of repetitive extreme hip motion on the labrum cartilage. Our hypothesis is that extreme movements can induce excessive labral deformations and lead to early arthritis. To verify this hypothesis, an optical motion capture system is used to estimate the kinematics of the patient-specific hip joint, while soft tissue artifacts are reduced with an effective correction method. Subsequently, a physical simulation system is used during motion to compute accurate labral deformations and to assess the global pressure of the labrum, as well as any local pressure excess that may be physiologically damaging. Results show that peak contact pressures occur at extreme hip flexion/abduction and that the pressure distribution corresponds with radiologically observed damage zones in the labrum.

4.
Repetitive workplace tasks are associated with fatigue-induced changes to shoulder muscular strategies, potentially altering kinematics and elevating susceptibility to tissue overexposures. Accessible and reliable methods to detect shoulder muscle fatigue in the workplace are therefore valuable, and detectable changes in joint motion may provide a plausible fatigue identification method. In this investigation, the onset of the first kinematic changes, as identified by a symbolic motion structure representation (SMSR) algorithm, and the onset of substantial surface electromyography (sEMG) mean power frequency (MPF) fatigue were not significantly different, both occurring around 10% of task duration. This highlights the potential utility of SMSR-identified directional changes in joint motion during repetitive tasks as a cue of substantial muscle fatigue, enabling ergonomics responses that can mitigate shoulder muscular fatigue accumulation and its associated deleterious physical effects.

Practitioner Summary: The onset of substantial muscle fatigue during a repetitive dynamic task was assessed using kinematics and myoelectric-based techniques. Algorithmically detectable directional changes in upper extremity joint motion occurred with the onset of substantial muscle fatigue, highlighting the potential of this as a useful approach for workplace fatigue identification.
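The sEMG mean power frequency used in item 4 as the fatigue marker is a standard spectral quantity. A minimal sketch of its computation (the function name, windowing, and preprocessing are assumptions, not the study's pipeline):

```python
import numpy as np

def mean_power_frequency(emg_window, fs):
    """Mean power frequency of one sEMG window:
    MPF = sum(f * P(f)) / sum(P(f)).
    A downward MPF shift across successive windows is the usual fatigue cue."""
    emg = emg_window - np.mean(emg_window)     # remove DC offset
    power = np.abs(np.fft.rfft(emg)) ** 2      # one-sided power spectrum
    freqs = np.fft.rfftfreq(len(emg), d=1.0 / fs)
    return np.sum(freqs * power) / np.sum(power)
```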


5.
Operator posture estimation for a teleoperated nursing robot system
左国玉  于双悦  龚道雄 《自动化学报》2016,42(12):1839-1848
A teleoperated nursing robot system is designed, and to achieve follow-up motion control of the kinematically identical slave robot, methods for computing the master-side operator's body posture are studied. First, a motion capture system composed of inertial sensing units is built to collect the operator's posture information, which serves as the motion commands for the slave robot, and a quaternion method provides the initial solution from the raw motion data. Second, the quaternion-derived posture data are converted into target joint angles for the humanoid nursing robot, establishing an isomorphic mapping from human posture to robot motion. Finally, to validate the proposed posture-estimation method, experiments were designed in which the operator controlled the nursing robot to deliver and fetch a medicine bottle. The results show that the method performs essentially as well as the reference system; even while the operator's posture changes rapidly, the system still obtains high-accuracy target posture data, with errors remaining below 2% under dynamic conditions, and the nursing robot reproduces the operator's movements well in real time. The method meets the speed and accuracy requirements for processing human posture data in typical nursing tasks.
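Item 5's step of converting quaternion-derived posture into robot joint target angles is not specified in the abstract; one plausible piece of that mapping is a quaternion-to-Euler conversion. A sketch under assumed conventions ((w, x, y, z) ordering, ZYX rotation sequence):

```python
import numpy as np

def quat_to_euler_zyx(q):
    """Convert a unit quaternion (w, x, y, z) into ZYX Euler angles
    (yaw, pitch, roll) in radians -- one way a segment orientation
    could be mapped onto humanoid joint targets."""
    w, x, y, z = q
    yaw = np.arctan2(2 * (w * z + x * y), 1 - 2 * (y * y + z * z))
    pitch = np.arcsin(np.clip(2 * (w * y - z * x), -1.0, 1.0))
    roll = np.arctan2(2 * (w * x + y * z), 1 - 2 * (x * x + y * y))
    return yaw, pitch, roll
```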

6.
This paper describes a walking pattern generation algorithm for a robotic transfemoral prosthesis that is synchronized with the walking motion of a transfemoral amputee, together with posture stabilization for ground adaptation and balance maintenance on inclined ground. The robotic transfemoral prosthesis developed in this study has a knee joint and ankle roll/pitch joints for walking on complex slopes. Walking motion data obtained from a motion capture system are used as the standard walking pattern data to accurately imitate the inherent gait of the wearer. Walking intention, percent of gait cycle (PGC), and walking stride are predicted through two inertial sensors attached at both thighs, and the joint angles of the robotic transfemoral prosthesis are then generated in real time from the PGC and the standard walking pattern data. Additionally, variable impedance control and zero moment point (ZMP) control are carried out with a force/torque sensor for posture stabilization against variable ground slopes, and ground slope compensation and disturbance rejection are performed with inertial sensors at the foot and shank. The performance of the walking pattern generation algorithm and posture stabilization control was verified through walking experiments by one of the authors on an inclined treadmill.
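Generating prosthesis joint angles from the PGC and a standard pattern, as item 6 describes, amounts to interpolating a lookup table sampled over one gait cycle. A minimal sketch under an assumed data layout (the paper's actual pattern representation is not given):

```python
import numpy as np

def joint_angles_from_pgc(pgc, pattern):
    """Interpolate joint angles from a standard walking pattern.
    pgc: percent of gait cycle in [0, 100).
    pattern: (N, J) array; row i holds the J joint angles at
    100 * i / N percent of the cycle."""
    n = pattern.shape[0]
    pos = (pgc / 100.0) * n            # fractional row index
    i0 = int(np.floor(pos)) % n
    i1 = (i0 + 1) % n                  # wrap around the cycle
    frac = pos - np.floor(pos)
    return (1.0 - frac) * pattern[i0] + frac * pattern[i1]
```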

7.
Human animation synthesis based on motion-unit analysis
Extracting basic motion units that reflect the regularities of human movement from motion capture data and using them to synthesize new human animation has become an active research topic, but existing extraction methods ignore the temporal ordering of motion sequences and the motion correlations between joints. To address this, a new method for extracting basic motion units is proposed. First, PCA reduces the dimensionality of the high-dimensional human motion data, and squared Mahalanobis distance measures the similarity between poses. Second, dynamic time warping combined with a sum-of-squared-error criterion automatically segments and labels the temporal motion sequences. Finally, a probabilistic transition model between motion units builds a motion graph, from which new, realistic human animations are synthesized under the given constraints.
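The pose-similarity step named in item 7 (PCA reduction followed by squared Mahalanobis distance) can be sketched directly; the helper names and the covariance estimate are assumptions:

```python
import numpy as np

def pca_reduce(X, k):
    """Project N x D motion frames onto the top-k principal components."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T

def sq_mahalanobis(a, b, cov_inv):
    """Squared Mahalanobis distance between two reduced pose vectors."""
    d = a - b
    return float(d @ cov_inv @ d)
```

With Z = pca_reduce(X, k), one workable metric matrix is cov_inv = np.linalg.inv(np.cov(Z, rowvar=False)).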

8.
Motion capture is mainly based on standard systems using optic, magnetic or sonic technologies. In this paper, the possibility to detect useful human motion based on new techniques using different types of body‐fixed sensors is shown. In particular, a combination of accelerometers and angular rate sensors (gyroscopes) showed a promising design for a hybrid kinematic sensor measuring the 2D kinematics of a body segment. These sensors together with a portable datalogger, and using simple biomechanical models, allow capture of outdoor and long‐term movements and overcome some limitations of the standard motion capture systems. Significant parameters of body motion, such as nature of motion (postural transitions, trunk rotation, sitting, standing, lying, walking, jumping) and its spatio‐temporal features (velocity, displacement, angular rotation, cadence and duration) have been evaluated and compared to the camera‐based system. Based on these parameters, the paper outlines the possibility to monitor physical activity and to perform gait analysis in the daily environment, and reviews several clinical investigations related to fall risk in the elderly, quality of life, orthopaedic outcome and sport performance. Taking advantage of all the potential of these body‐fixed sensors should be promising for motion capture and particularly in environments not suitable for standard technology such as in any field activity. Copyright © 2004 John Wiley & Sons, Ltd.

9.
This paper describes the analysis and design of an assistive device for elderly people under development at the Egypt-Japan University of Science and Technology (E-JUST), named the E-JUST assistive device (EJAD). Several experiments were carried out using a motion capture system (VICON) and inertial sensors to identify the human posture during the sit-to-stand motion. The EJAD uses only two inertial measurement units (IMUs) fused through an adaptive neuro-fuzzy inference system (ANFIS) algorithm to imitate the real motion of the caregiver. The EJAD consists of two main parts, a robot arm and an active walker. The robot arm is a 2-degree-of-freedom (2-DOF) planar manipulator. In addition, a back support with a passive joint is used to support the patient's back. The IMUs on the leg and trunk of the patient are used to compensate for and adapt to the EJAD system motion depending on the obtained patient posture. The ANFIS algorithm is used to train the fuzzy system that converts the IMU signals to the right posture of the patient. A control scheme is proposed to control the system motion based on practical measurements taken from the experiments. A computer simulation showed a relatively good performance of the EJAD in assisting the patient.

10.
V. Ortenzi  R. Stolkin  J. Kuo  M. Mistry 《Advanced Robotics》2017,31(19-20):1102-1113
This paper reviews hybrid motion/force control, a control scheme which enables robots to perform tasks involving both motion, in the free space, and interactive force, at the contacts. Motivated by the large amount of literature on this topic, we facilitate comparison and elucidate the key differences among different approaches. An emphasis is placed on the study of the decoupling of motion control and force control. We conclude that a complete decoupling is indeed possible; however, this feature can be relaxed or sacrificed to reduce the robot's joint torques while still completing the task.
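The motion/force decoupling discussed in item 10 is classically expressed with a selection matrix that routes each task axis to either a motion loop or a force loop. A deliberately simplified, proportional-only sketch (the gains and names are assumptions, not a law taken from the reviewed papers):

```python
import numpy as np

def hybrid_command(sel, x, x_des, f, f_des, kp_motion=50.0, kp_force=2.0):
    """Task-space hybrid motion/force command.
    sel: per-axis flags (1 = force-controlled, 0 = motion-controlled).
    Returns a commanded wrench; joint torques would follow via tau = J.T @ wrench."""
    S = np.diag(sel)
    I = np.eye(len(x))
    motion_term = (I - S) @ (kp_motion * (x_des - x))  # position error, free axes
    force_term = S @ (kp_force * (f_des - f))          # force error, contact axes
    return motion_term + force_term
```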

11.
As human-computer interaction technology develops, natural, multimodal interaction will become the primary way people interact with computers, which first requires that the computer correctly understand and capture human behavioral features; motion capture technology was proposed against this background. With motion capture, a computer can understand human movement, and users can issue commands and convey information through body posture, orientation, gestures, and facial expressions, making motion capture one of the key technologies of next-generation human-computer interaction. Existing motion capture research based on MEMS inertial sensors has mainly targeted animation and film production and is expensive; as sensor integration improves and prices fall, new system-design requirements have emerged. Building on existing motion capture techniques, a more broadly applicable prototype human motion capture system is designed and implemented. The prototype realizes the collection and fusion of human motion information from inertial sensing nodes, data transmission from the nodes to a sink node, and a program that renders the motion of a virtual human model in real time, covering the full pipeline from measurement through collection to simulated presentation.

12.
We present a method for capturing the skeletal motions of humans using a sparse set of potentially moving cameras in an uncontrolled environment. Our approach is able to track multiple people even in front of cluttered and non‐static backgrounds, and unsynchronized cameras with varying image quality and frame rate. We completely rely on optical information and do not make use of additional sensor information (e.g. depth images or inertial sensors). Our algorithm simultaneously reconstructs the skeletal pose parameters of multiple performers and the motion of each camera. This is facilitated by a new energy functional that captures the alignment of the model and the camera positions with the input videos in an analytic way. The approach can be adopted in many practical applications to replace the complex and expensive motion capture studios with few consumer‐grade cameras even in uncontrolled outdoor scenes. We demonstrate this based on challenging multi‐view video sequences that are captured with unsynchronized and moving (e.g. mobile‐phone or GoPro) cameras.

13.
Extracting keyframes from motion capture data
A simple form of the difference between two rotations is derived under the quaternion representation, and the total rotational change over all body joints is taken as the inter-frame distance. On this basis, an efficient algorithm for extracting keyframes from motion capture data is designed, and the original animation is reconstructed through linear interpolation of vectors and spherical linear interpolation of quaternions. Experiments show that the algorithm compresses the data well and that the extracted keyframes visually summarize the original animation.
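The two ingredients of item 13 — a quaternion distance between consecutive frames and slerp-based reconstruction — have compact standard forms; a sketch assuming (w, x, y, z) ordering:

```python
import numpy as np

def quat_distance(q1, q2):
    """Rotation angle (rad) between two unit quaternions:
    a simple inter-frame distance for one joint."""
    dot = abs(float(np.dot(q1, q2)))         # abs handles the double cover
    return 2.0 * np.arccos(np.clip(dot, -1.0, 1.0))

def slerp(q1, q2, t):
    """Spherical linear interpolation, used to rebuild frames
    between the extracted keyframes."""
    dot = float(np.dot(q1, q2))
    if dot < 0.0:                            # take the short arc
        q2, dot = -q2, -dot
    if dot > 0.9995:                         # nearly parallel: lerp + renormalize
        q = q1 + t * (q2 - q1)
        return q / np.linalg.norm(q)
    theta = np.arccos(np.clip(dot, -1.0, 1.0))
    return (np.sin((1 - t) * theta) * q1 + np.sin(t * theta) * q2) / np.sin(theta)
```

Summing quat_distance over all joints gives the total inter-frame distance the abstract uses as its keyframe criterion.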

14.
A motion blending method that requires no manual intervention is introduced, together with a motion-cycle detection method based on the regularities of joint movement. By analyzing motion capture data and tracking the change of the angle between the lines connecting the two knees to the hip node, the motion cycle is determined; spatio-temporal warping, interpolation, and constraint-based reconstruction then produce high-quality blended animation. Experimental results show that the algorithm computes the motion cycle accurately and makes the constrained blended motion more realistic.
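The cycle cue in item 14 — the angle between the lines joining the two knees to the hip node — is a single arccos per frame. A sketch (marker names assumed):

```python
import numpy as np

def knee_hip_angle(hip, left_knee, right_knee):
    """Angle (rad) at the hip between the hip->left-knee and
    hip->right-knee vectors; its oscillation delimits the motion cycle."""
    u = left_knee - hip
    v = right_knee - hip
    c = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.arccos(np.clip(c, -1.0, 1.0))
```

Peaks of this angle over time (e.g. scipy.signal.find_peaks on the per-frame values) then mark the boundaries of successive cycles.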

15.
Multi-camera optical motion capture systems obtain the kinematic parameters of a moving object by capturing the spatial coordinates of markers. This paper proposes a general method for capturing object motion data with such a system. First, a fixed model composed of three markers yields the instantaneous coordinates of the corresponding markers as the object moves, and a vector method solves for the object's pose at each sample point. Kalman filtering then removes the influence of systematic and environmental errors during capture, producing a smooth pose trajectory, from which the object's velocity, acceleration, angular velocity, and angular acceleration at each sample point are computed. Finally, motion-data capture experiments with a collaborative robot verify the correctness of the proposed method.
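Recovering a rigid pose from three markers, as item 15's vector method does, has a conventional construction: build an orthonormal frame from the marker triangle. A sketch (not necessarily the paper's exact formulation):

```python
import numpy as np

def pose_from_three_markers(p1, p2, p3):
    """Rigid-body pose from three non-collinear markers: origin at p1,
    x-axis toward p2, z-axis normal to the marker plane.
    Returns (R, t): a 3x3 rotation matrix and the origin."""
    x = p2 - p1
    x = x / np.linalg.norm(x)
    n = np.cross(x, p3 - p1)          # normal to the marker plane
    z = n / np.linalg.norm(n)
    y = np.cross(z, x)                # completes a right-handed frame
    return np.column_stack((x, y, z)), p1
```

Finite differences of the Kalman-smoothed pose trajectory would then give the velocities and accelerations the paper reports.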

16.
Motion capture is an increasingly popular animation technique; however data acquired by motion capture can become substantial. This makes it difficult to use motion capture data in a number of applications, such as motion editing, motion understanding, automatic motion summarization, motion thumbnail generation, or motion database search and retrieval. To overcome this limitation, we propose an automatic approach to extract keyframes from a motion capture sequence. We treat the input sequence as motion curves, and obtain the most salient parts of these curves using a new proposed metric, called 'motion saliency'. We select the curves to be analysed by a dimension reduction technique, Principal Component Analysis (PCA). We then apply frame reduction techniques to extract the most important frames as keyframes of the motion. With this approach, around 8% of the frames are selected to be keyframes for motion capture sequences. Copyright © 2010 John Wiley & Sons, Ltd.

17.
This paper presents a novel recurrent neural network-based method to construct a latent motion manifold that can represent a wide range of human motions in a long sequence. We introduce several new components to increase the spatial and temporal coverage of the motion space while retaining the details of the motion capture data. These include new regularization terms for the motion manifold, a combination of two complementary decoders for predicting joint rotations and joint velocities, and the addition of a forward kinematics layer to consider both joint rotation and position errors. In addition, we propose a set of loss terms that improve the overall quality of the motion manifold from various aspects, such as the capability of reconstructing not only the motion but also the latent manifold vector, and the naturalness of the motion through an adversarial loss. These components contribute to creating a compact and versatile motion manifold that allows for creating new motions by performing random sampling and algebraic operations, such as interpolation and analogy, in the latent motion manifold.

18.
《Ergonomics》2012,55(8):1043-1045
A widely used risk prediction tool, the revised NIOSH lifting equation (RNLE), provides the recommended weight limit (RWL), but is limited by analyst subjectivity, experience, and resources. This paper describes a robust, non-intrusive, straightforward approach to automatically extract the spatial and temporal factors necessary for the RNLE using a single video camera in the sagittal plane. The participant's silhouette is segmented by motion information, and the novel use of a ghosting effect provides accurate detection of lifting instances and hand and feet location prediction. Laboratory tests using 6 participants, each performing 36 lifts, showed that a nominal 640 × 480 pixel 2D video, in comparison to 3D motion capture, provided RWL estimations within 0.2 kg (SD = 1.0 kg). The linear regression between the video and 3D tracking RWL was R² = 0.96 (slope = 1.0, intercept = 0.2 kg). Since low definition video was used in order to synchronise with motion capture, better performance is anticipated using high definition video.

Practitioner's summary: An algorithm for automatically calculating the revised NIOSH lifting equation using a single video camera was evaluated in comparison to laboratory 3D motion capture. The results indicate that this method has suitable accuracy for practical use and may be, particularly, useful when multiple lifts are evaluated.

Abbreviations: 2D: Two-dimensional; 3D: Three-dimensional; ACGIH: American Conference of Governmental Industrial Hygienists; AM: asymmetric multiplier; BOL: beginning of lift; CM: coupling multiplier; DM: distance multiplier; EOL: end of lift; FIRWL: frequency independent recommended weight limit; FM: frequency multiplier; H: horizontal distance; HM: horizontal multiplier; IMU: inertial measurement unit; ISO: International Organization for Standardization; LC: load constant; NIOSH: National Institute for Occupational Safety and Health; RGB: red, green, blue; RGB-D: red, green, blue – depth; RNLE: revised NIOSH lifting equation; RWL: recommended weight limit; SD: standard deviation; TLV: threshold limit value; VM: vertical multiplier; V: vertical distance
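The RWL that item 18's video pipeline estimates is the standard RNLE product of multipliers. A sketch of the metric form, with the frequency and coupling multipliers (FM, CM) taken from the published tables rather than computed here:

```python
def niosh_rwl(H, V, D, A, FM, CM, LC=23.0):
    """Revised NIOSH lifting equation (metric form):
    RWL = LC * HM * VM * DM * AM * FM * CM.
    H: horizontal hand distance (cm), V: vertical hand height (cm),
    D: vertical travel distance (cm), A: asymmetry angle (deg)."""
    HM = 25.0 / max(H, 25.0)              # horizontal multiplier
    VM = 1.0 - 0.003 * abs(V - 75.0)      # vertical multiplier
    DM = 0.82 + 4.5 / max(D, 25.0)        # distance multiplier
    AM = 1.0 - 0.0032 * A                 # asymmetric multiplier
    return LC * HM * VM * DM * AM * FM * CM
```

The paper's contribution is extracting H, V, D, and the lift timing automatically from video; the equation itself is unchanged.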

19.
Virtual mannequins need to navigate in order to interact with their environment. Their autonomy to accomplish navigation tasks is ensured by locomotion controllers. Control inputs can be user‐defined or automatically computed to achieve high‐level operations (e.g. obstacle avoidance). This paper presents a locomotion controller based on a motion capture edition technique. Controller inputs are the instantaneous linear and angular velocities of the walk. Our solution works in real time and supports at any time continuous changes of inputs. The controller combines three main components to synthesize locomotion animations in a four‐stage process. First, the Motion Library stores motion capture samples. Motion captures are analysed to compute quantitative characteristics. Second, these characteristics are represented in a linear control space. This geometric representation is appropriate for selecting and weighting three motion samples with respect to the input state. Third, locomotion cycles are synthesized by blending the selected motion samples. Blending is done in the frequency domain. Lastly, successive postures are extracted from the synthesized cycles in order to complete the animation of the moving mannequin. The method is demonstrated in this paper in a locomotion‐planning context. Copyright © 2006 John Wiley & Sons, Ltd.

20.
This study aimed to develop and deploy a novel motion capture system capable of measuring space suit kinematics in an underwater test environment. The system was built using off-the-shelf, dive-rated hardware and open source software tools. The new system's performance was validated by comparing its measurement outcome to a reference motion capture system in a dry-land condition. Measurement errors, defined as the linear distances of the marker position measurements between the developed and reference systems, were 1.9 cm root-mean-square error (RMSE), with a 50th-percentile error of 1.3 cm and a 95th-percentile error of 3.6 cm. Measurement error tended to increase with motion speed. Similarly, the error showed a slight tendency to increase with distance from the center of the calibrated capture volume, although the trend was not clearly identifiable. A second metric of system accuracy was calculated by assessing the wand length. The system was deployed underwater and tested for space suit kinematic assessments. Given the speed and range of space suit motions underwater, the measurement error of the developed system underwater was estimated to be approximately 1.39 cm, and the wand-length estimation error had an RMSE of 0.67 cm with a 50th-percentile error of 0.51 cm and a 95th-percentile error of 0.92 cm. Overall, the new system showed reliable and acceptably accurate kinematic measurements comparable to a common dry-land motion capture system and can provide usable suit performance metrics in a simulated microgravity environment.

Relevance to industry: An underwater motion capture system was developed using off-the-shelf equipment. The new system was deployed to assess the kinematic mobility of space suits. The system can offer an inexpensive solution where a traditional motion capture system may not be applicable.
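The accuracy summary in item 20 (RMSE plus 50th- and 95th-percentile marker errors) is straightforward to reproduce from paired trajectories; a sketch with assumed array shapes:

```python
import numpy as np

def marker_errors(measured, reference):
    """Euclidean marker-position errors between two systems.
    measured, reference: (N, 3) arrays of paired marker positions.
    Returns (RMSE, 50th-percentile, 95th-percentile) errors."""
    d = np.linalg.norm(measured - reference, axis=1)   # per-sample distances
    rmse = float(np.sqrt(np.mean(d ** 2)))
    p50, p95 = np.percentile(d, [50, 95])
    return rmse, float(p50), float(p95)
```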
