Sort order: 3 results found, search time 0 ms
1.
J Usabiaga, R Crespo, I Iza, J Aramendi, N Terrados, JJ Poza 《Canadian Metallurgical Quarterly》1997, 22(17): 1965-1969
STUDY DESIGN: A radiologic and electromyographic study was done of the adaptation of the lumbar spine to high-performance cycling. OBJECTIVES: To evaluate changes in the lumbar spine produced by different cycling positions on different types of bicycles used during competition. METHODS: Three professional cyclists were observed to evaluate changes in the lumbar spine. Radiographs were obtained of the different positions adopted by the cyclists during competition, and changes in the angles of the lumbar spine were measured. An electromyographic study was done of the abdominal, lumbar, and thoracic paravertebral muscles. RESULTS: The cyclists' positions involved a change from discal lordosis to kyphosis. To obtain a more aerodynamic position, the cyclists flexed the hip and made the pelvis horizontal without changing disc angles. The contraction of paravertebral lumbar muscles was proportional to pedalling intensity and decreased in more aerodynamic positions. The tone of the paravertebral thoracic muscles depended on the extent of cervical hyperextension. Abdominal muscles remained relaxed in all bicycle positions and with all pedalling intensities. CONCLUSIONS: The changes observed could modify the normal biomechanics of the lumbar spine, but the overall mechanical load on the spine is reduced by shifting weight onto the upper limbs. The imbalance that occurs between the activity of flexor and extensor muscles could cause lumbar pain in persons without proper physical preparation.
2.
Jorge Usabiaga, Ali Erol, George Bebis, Richard Boyle, Xander Twombly 《Machine Vision and Applications》2009, 21(1): 1-15
Immersive virtual environments with life-like interaction capabilities have very demanding requirements, including high-precision motion capture and high processing speed. These issues raise many challenges for computer vision-based motion estimation algorithms. In this study, we consider the problem of hand tracking using multiple cameras and estimating the hand's 3D global pose (i.e., position and orientation of the palm). Our interest is in developing an accurate and robust algorithm to be employed in an immersive virtual training environment, called "Virtual GloveboX" (VGX) (Twombly et al. in J Syst Cybern Inf 2:30–34, 2005), which is currently under development at NASA Ames. In this context, we present a marker-based hand tracking and 3D global pose estimation algorithm that operates in a controlled, multi-camera environment built to track the user's hand inside VGX. The key idea of the proposed algorithm is tracking the 3D position and orientation of an elliptical marker placed on the dorsal part of the hand using model-based tracking approaches and active camera selection. It should be noted that the use of markers is well justified in the context of our application, since VGX naturally allows for the use of gloves without disrupting the fidelity of the interaction. Our experimental results and comparisons illustrate that the proposed approach is more accurate and robust than related approaches. A byproduct of our multi-camera ellipse tracking algorithm is that, with only minor modifications, the same algorithm can be used to automatically re-calibrate (i.e., fine-tune) the extrinsic parameters of a multi-camera system, leading to more accurate pose estimates.
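The abstract above describes recovering a marker's 3D position from multiple calibrated cameras. The paper's own model-based ellipse tracker is not reproduced here; as a generic illustration of the multi-camera geometry involved, the following is a minimal sketch of linear (DLT) triangulation of a single point, assuming each camera is described by a known 3×4 projection matrix. The function name and interface are illustrative, not from the paper.

```python
import numpy as np

def triangulate(projections, points2d):
    """Linear (DLT) triangulation of one 3D point seen by several
    calibrated cameras.

    projections: list of 3x4 camera projection matrices P_i
    points2d:    list of (u, v) pixel observations of the same point
    Returns the estimated 3D point as a length-3 array.
    """
    rows = []
    for P, (u, v) in zip(projections, points2d):
        # u = (P[0] . X) / (P[2] . X)  =>  u * P[2] - P[0] is a linear
        # constraint on the homogeneous point X (similarly for v).
        rows.append(u * P[2] - P[0])
        rows.append(v * P[2] - P[1])
    A = np.asarray(rows)
    # The null-space direction of A (last right-singular vector)
    # is the least-squares homogeneous solution.
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]
```

With more than two cameras the system is overdetermined, which is what makes the "active camera selection" mentioned in the abstract useful: views in which the marker is seen at a shallow angle contribute poorly conditioned constraint rows.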
3.
Jorge Usabiaga, George Bebis, Ali Erol, Mircea Nicolescu, Monica Nicolescu 《Computational Intelligence》2007, 23(4): 484-496
Recognizing human actions from video has been a challenging problem in computer vision. Although human actions can be inferred from a wide range of data, it has been demonstrated that simple human actions can be inferred by tracking the movement of the head in 2D. This is a promising idea, as detecting and tracking the head is expected to be simpler and faster because the head has lower shape variability and higher visibility than other body parts (e.g., hands and/or feet). Although tracking the movement of the head alone does not provide sufficient information for distinguishing among complex human actions, it could serve as a complementary component of a more sophisticated action recognition system. In this article, we extend this idea by developing a more general, viewpoint-invariant action recognition system by detecting and tracking the 3D position of the head using multiple cameras. The proposed approach employs Principal Component Analysis (PCA) to register the 3D trajectories in a common coordinate system and Dynamic Time Warping (DTW) to align them in time for matching. We present experimental results to demonstrate the potential of using 3D head trajectory information to distinguish among simple but common human actions independently of viewpoint.
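The PCA registration and DTW matching named in the abstract can be sketched generically as follows. This is not the paper's implementation; it is a minimal illustration of the two standard building blocks, assuming each head trajectory is an (n, 3) array of 3D positions.

```python
import numpy as np

def pca_register(traj):
    """Register a 3D trajectory in its own principal-axis frame:
    center it, then rotate it onto its principal components.
    Trajectories registered this way agree up to per-axis sign,
    independently of the original viewpoint."""
    centered = traj - traj.mean(axis=0)
    # Rows of vt are the principal directions of the point cloud.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt.T

def dtw_distance(a, b):
    """Dynamic Time Warping distance between two trajectories
    a (n, d) and b (m, d), with Euclidean frame-to-frame cost."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            # Extend the cheapest of the three admissible warps.
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]
```

In a recognition pipeline of this shape, a query trajectory would be PCA-registered and then compared by DTW distance against registered exemplar trajectories for each action class, with the nearest exemplar determining the label.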