20 similar documents found; search took 15 ms.
1.
2.
3.
An important aspect of present-day humanoid robot research is to make such robots look realistic and human-like, both in appearance and in motion and mannerism. In this paper, we focus our study on advanced control leading to realistic motion coordination for a humanoid robot's neck and eyes while tracking an object. The motivating application for such controls is conversational robotics, in which a robot head "actor" should be able to detect and make eye contact with a human subject. In such a scenario, the 3D position and orientation of an object of interest in space should be tracked by the redundant head–eye mechanism, partly through its neck and partly through its eyes. We propose an optimization approach, combined with real-time visual feedback, to generate realistic robot motion and to robustify it. We also offer experimental results showing that the neck–eye motion obtained from the proposed algorithm is realistic compared to the head–eye motion of humans.
4.
《Computer Vision and Image Understanding》2010,114(8):915-927
The position of a world point's solar shadow depends on the point's geographical location and on the geometric relationship between the direction of sunlight and the ground plane onto which the shadow is cast. This paper investigates the properties of solar shadow trajectories on a planar surface and shows that the camera parameters, the latitude and longitude, and the relative height ratios of the shadow-casting objects can be estimated from two observed shadow trajectories. One contribution is the recovery of the horizon line and the metric rectification matrix from four observations of two shadow tips. The other is the use of the design of an analemmatic sundial to obtain the shadow conic and thereby recover the camera's geographical location. The proposed method does not require the shadow-casting objects, or any vertical object, to be visible for camera calibration. The approach is thoroughly validated on both synthetic and real data and tested against various sources of error, including noise, the number of observations, object locations, and camera orientations. We also present applications to image-based metrology.
5.
This paper presents a simple approach to capturing the appearance and structure of immersive scenes based on imagery acquired with an omnidirectional video camera. The scheme proceeds by combining techniques from structure-from-motion with ideas from image-based rendering. An interactive photogrammetric modeling scheme is used to recover the locations of a set of salient features in the scene (points and lines) from image measurements in a small set of keyframe images. The estimates obtained from this process are then used as a basis for estimating the position and orientation of the camera at every frame in the video clip. By augmenting the video sequence with pose information, we give the end user the ability to index the video sequence spatially rather than temporally. This allows the user to explore the immersive scene by interactively selecting the desired viewpoint and viewing direction.
6.
I. E. Zaramenskikh M. Yu. Ovchinnikov I. V. Ritus 《Mathematical Models and Computer Simulations》2010,2(1):9-21
The paper is devoted to the development of a control algorithm that nullifies the relative secular drift, due to the Earth's oblateness, in the motion of a satellite formation. The chief satellite's orbit is assumed to be circular and is not controlled to maintain the formation. A deputy satellite is equipped with a passive magnetic attitude control system, with a permanent magnet and a low-propulsion thruster aligned along the same principal axis of inertia. We study the possibility of eliminating the relative secular drift through limited control, where the limitation consists of constraints on the direction and magnitude of the feasible control. The control eliminating the relative secular drift is developed analytically. The analytical results are confirmed by numerical simulation of the satellite motion, using parameters of the first Russian nanosatellite TNS-0 No. 1.
7.
Carmen Monroy Rafael Kelly Marco Arteaga Eusebio Bugarin 《Journal of Intelligent and Robotic Systems》2007,49(2):171-187
Visual servoing is a powerful approach to enlarging the applications of robotic systems by incorporating visual information into the control system. Teleoperation, the use of machines in a remote way, is likewise increasing the number of applications in many domains. This paper presents a remote visual servoing system that uses only partial camera calibration and exploits the high bandwidth of Internet2 to stream video information. The underlying control scheme follows the image-based philosophy of direct visual servoing, computing the torque inputs applied to the robot from error signals defined in the image plane, and invokes a velocity field strategy for guidance. The novelty of this paper is a remote visual servoing system with the following features: (1) full camera calibration is unnecessary, (2) direct visual servoing does not neglect the robot's nonlinear dynamics, and (3) a novel velocity field control approach is utilized. Experiments carried out between two laboratories demonstrate the effectiveness of the application.
Work partially supported by CONACyT grant 45826 and CUDI.
8.
A. Yu. Aleksandrov K. A. Antipov A. V. Platonov A. A. Tikhonov 《Journal of Computer and Systems Sciences International》2016,55(2):296-309
An artificial Earth satellite in a circular equatorial orbit is considered. We analyze the possibility of three-axis stabilization of the satellite in the König coordinate system using an electrodynamic control system that exploits Lorentz and magnetic control torques. Conditions under which the electrodynamic control solves the problem in the presence of a gravitational disturbance torque are obtained. In the nonlinear formulation, sufficient conditions for the asymptotic stability of the satellite's equilibrium position are derived.
9.
This paper proposes a new approach to image-based rendering that generates an image viewed from an arbitrary camera position and orientation by rendering optical flows extracted from reference images. To derive valid optical flows, we develop an analysis technique that improves the quality of stereo matching. Without using any special equipment, such as range cameras, this technique constructs reliable optical flows from a sequence of matching results between reference images. We also derive validity conditions for optical flows and show that the obtained flows satisfy those conditions. Since the environment geometry is inferred from the optical flows, we are able to generate more accurate images with this additional geometric information. Our approach makes it possible to combine an image rendered from optical flows with an image generated by a conventional rendering technique through a simple Z-buffer algorithm.
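The Z-buffer combination step mentioned above can be sketched in a few lines. This is a generic per-pixel depth test, not the paper's implementation; the array names are illustrative:

```python
import numpy as np

def zbuffer_composite(rgb_a, z_a, rgb_b, z_b):
    """Combine two rendered images by keeping, per pixel, the colour
    of whichever surface is nearer (smaller depth value)."""
    nearer_a = (z_a <= z_b)[..., None]   # broadcast the mask over RGB channels
    return np.where(nearer_a, rgb_a, rgb_b)

# Two 1x2-pixel renderings: image A is nearer only in the left pixel.
rgb_a = np.array([[[255, 0, 0], [255, 0, 0]]])
rgb_b = np.array([[[0, 0, 255], [0, 0, 255]]])
z_a = np.array([[1.0, 3.0]])
z_b = np.array([[2.0, 2.0]])
out = zbuffer_composite(rgb_a, z_a, rgb_b, z_b)
```

The same test merges a flow-rendered image with a conventionally rendered one, provided both carry per-pixel depth.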
10.
Global Optimization through Rotation Space Search
This paper introduces a new algorithmic technique for solving certain problems in geometric computer vision. The main novelty of the method is a branch-and-bound search over rotation space, used here to determine the camera orientation. By searching over all possible rotations, problems can be reduced to known fixed-rotation problems for which optimal solutions have previously been given. In particular, a method is developed for estimating the essential matrix, giving the first guaranteed optimal algorithm for estimating relative pose under a cost function based on reprojection errors. Recently, convex optimization techniques have been shown to provide optimal solutions to many common problems in structure from motion; however, they do not apply to problems involving rotations. The search method described in this paper allows such problems to be solved optimally. Apart from the essential matrix, the algorithm is applied to the camera pose problem, again providing an optimal algorithm. The approach has been implemented and tested on a number of synthetically generated and real data sets with good performance.
NICTA is funded by the Australian Government's Backing Australia's Ability initiative, in part through the Australian Research Council.
11.
In this article, we propose a localization scheme for a mobile robot based on the distances between the robot and moving objects. The method combines the distance data obtained from ultrasonic sensors on a mobile robot and estimates the locations of both the mobile robot and the moving object. The movement of the object is detected from the combination of the data and the object's estimated position; the mobile robot's location is then derived from the a priori known initial state. We use a kinematic model that represents the movement of the robot and the object, and a Kalman filtering algorithm to handle estimation error and measurement noise. The performance is verified through computer simulations, and the results of physical experiments are presented and discussed. The proposed approach allows a mobile robot to determine its own position in a weakly structured environment.
This work was presented in part at the 12th International Symposium on Artificial Life and Robotics, Oita, Japan, January 25–27, 2007.
12.
In image-based visual servoing (IBVS), changes in the image are interpreted directly as camera motion rather than as Cartesian velocity commands for the manipulator end-effector, which can produce roundabout manipulator trajectories and the camera-retreat phenomenon. To address this problem, a visual servoing scheme is proposed that decouples rotation from translation and executes the rotation first. The scheme has a small computational load and a short system response time; it eliminates the interference between image rotation and translation, overcomes the camera-retreat phenomenon of traditional image-based visual servoing, and achieves time- and path-optimal control. The cause of the retreat phenomenon is explained using the control law of traditional IBVS and the camera imaging model. Two-dimensional motion simulations demonstrate the effectiveness of the proposed scheme.
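For background, the classical IBVS point-feature law against which camera retreat is usually analysed can be sketched as follows. This is the standard coupled law, not the decoupled rotation-first scheme of the entry above, and the example points and depths are hypothetical:

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """2x6 interaction matrix (image Jacobian) of a normalised image point
    (x, y) at depth Z, relating point velocity to the camera twist
    (vx, vy, vz, wx, wy, wz)."""
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x * x), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y * y, -x * y, -x],
    ])

def ibvs_twist(features, goals, depths, lam=0.5):
    """Classical IBVS law: twist = -lam * pinv(L) @ e, with the interaction
    matrix L stacked over all point features and e the image error."""
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(features, depths)])
    e = (np.asarray(features) - np.asarray(goals)).ravel()
    return -lam * np.linalg.pinv(L) @ e

# Hypothetical case: four points already at their goals -> zero commanded twist.
pts = [(0.1, 0.1), (-0.1, 0.1), (-0.1, -0.1), (0.1, -0.1)]
twist = ibvs_twist(pts, pts, depths=[1.0] * 4)
```

Because translation and rotation are coupled through `L`, a pure rotation error generally induces translational velocity components, which is the mechanism behind the retreat trajectory the entry describes.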
13.
The perspective reconstruction of a spheroid's position and orientation from a single image-plane ellipse is derived by direct inversion of the projection equations, assuming the semi-major and semi-minor axes are known. Attention is paid to the geometric interpretation of the reconstruction. The reconstruction is formulated and reduced to an eigenvalue problem, yielding two solutions for the spheroid's position and orientation. The symmetry of the polar planes for these solutions is then deduced.
14.
Wearable Visual Robots
Research reported in the wearable visual computing literature has used exclusively static (or non-active) cameras, making the imagery and image measurements dependent on the wearer's posture and motions. It is assumed that the camera points in a useful direction for viewing the relevant parts of the scene, at best by virtue of being mounted on the wearer's head, or at worst wholly by chance. Even when the camera points in roughly the correct direction, any visual processing relying on feature correspondence from a passive camera is made more difficult by the large, uncontrolled inter-image movements that occur when the wearer moves, or even breathes. This paper presents a wearable active visual sensor that achieves a degree of decoupling of camera movement from the wearer's posture and motions through a combination of inertial and visual sensor feedback and active control. The issues of sensor placement, robot kinematics, and their relation to wearability are discussed. The performance of the prototype robot is evaluated on some essential visual tasks, and potential applications for this kind of wearable robot are discussed.
15.
Michela Goffredo Imed Bouchrika John N. Carter Mark S. Nixon 《Multimedia Tools and Applications》2010,50(1):75-94
Many studies have confirmed that gait analysis can be used as a new biometric. In this research, gait analysis is deployed for people identification in multi-camera surveillance scenarios. We present a new method for viewpoint-independent markerless gait analysis that does not require camera calibration and works with a wide range of walking directions. These properties make the proposed method particularly suitable for gait identification in real surveillance scenarios, where people and their behaviour need to be tracked across a set of cameras. Tests on 300 synthetic and real video sequences, with subjects walking freely along different directions, have been performed. Since the choice of camera characteristics is a key point in the development of a smart surveillance system, the performance of the proposed approach is measured with respect to different video properties: spatial resolution, frame rate, data compression, and image quality. The results show that markerless gait analysis can be achieved without any knowledge of the camera's position or the subject's pose. The extracted gait parameters allow recognition of people walking from different views, with a mean recognition rate of 92.2%, and confirm that gait can be effectively used for subject identification in a multi-camera surveillance scenario.
16.
The distinctive features of wireless multimedia sensor networks (WMSNs) include application-specific quality-of-service (QoS) requirements and a limited energy supply, under which each node makes its own decisions selfishly. This paper therefore presents a game-theoretic power control approach for WMSNs, studying the effect of transmission power on QoS and energy efficiency. The game determines each node's transmission strategy through utility optimization according to the fluctuation of channel states. Here, the utility function is defined as the effective throughput per unit power, subject to the user's delay QoS constraints. The existence and uniqueness of a Nash equilibrium for the proposed game are proved. Finally, simulation results show that each user chooses the optimal transmission power to maximize its utility given the other parameters, and the effects of the delay constraints on the user's utility are quantified as well.
17.
We present a new approach to visual feedback control using image-based visual servoing with stereo vision. To control the position and orientation of a robot with respect to an object, a new technique is proposed using binocular stereo vision. Stereo vision enables us to calculate an exact image Jacobian not only around the desired location, but also at other locations. The proposed technique can guide a robot manipulator to the desired location without a priori knowledge, such as the relative distance to the desired location or a model of the object, even when the initial positioning error is large. We describe a model of stereo vision and how to generate feedback commands. The performance of the proposed visual servoing system is illustrated by simulation and by experimental results, and is compared with the conventional method on an assembly robot.
This work was presented in part at the Fourth International Symposium on Artificial Life and Robotics, Oita, Japan, January 19–22, 1999.
18.
In low-altitude surveying, mapping, remote sensing, and land-based mobile mapping systems, the position and orientation of a moving platform depend mainly on the Global Positioning System (GPS) and an inertial navigation system. However, GPS signals are unavailable in applications such as deep-space exploration and indoor robot control. In such circumstances, image-based methods are very important for the self-positioning and orientation of a moving platform. This paper therefore first surveys the state of the art of image-based self-position and orientation methods (ISPOM) for moving platforms from the following aspects: (1) a comparison among the major image-based methods for position and orientation (visual odometry, structure from motion, and simultaneous localization and mapping); (2) types of moving platform; (3) schemes for integrating the image sensor with other sensors; and (4) calculation methodology and the number of image sensors. The paper then proposes a new ISPOM scheme for a mobile robot that relies solely on image sensors. It combines the advantages of monocular and stereo vision, and estimates the relative position and orientation of the moving platform with high precision and at a high update rate. ISPOM is expected to move gradually from research to application and to play a vital role in deep-space exploration and indoor robot control.
19.
Jorge Usabiaga Ali Erol George Bebis Richard Boyle Xander Twombly 《Machine Vision and Applications》2009,21(1):1-15
Immersive virtual environments with life-like interaction capabilities have very demanding requirements, including high-precision motion capture and high processing speed. These issues raise many challenges for computer-vision-based motion estimation algorithms. In this study, we consider the problem of tracking the hand with multiple cameras and estimating its 3D global pose (i.e., the position and orientation of the palm). Our interest is in developing an accurate and robust algorithm to be employed in an immersive virtual training environment called "Virtual GloveboX" (VGX) (Twombly et al. in J Syst Cybern Inf 2:30–34, 2005), currently under development at NASA Ames. In this context, we present a marker-based hand tracking and 3D global pose estimation algorithm that operates in a controlled multi-camera environment built to track the user's hand inside VGX. The key idea of the proposed algorithm is to track the 3D position and orientation of an elliptical marker placed on the dorsal part of the hand using model-based tracking and active camera selection. The use of markers is well justified in the context of our application, since VGX naturally allows the use of gloves without disrupting the fidelity of the interaction. Our experimental results and comparisons illustrate that the proposed approach is more accurate and robust than related approaches. A byproduct of our multi-camera ellipse tracking algorithm is that, with only minor modifications, the same algorithm can be used to automatically re-calibrate (i.e., fine-tune) the extrinsic parameters of a multi-camera system, leading to more accurate pose estimates.
20.
《Robotics and Autonomous Systems》2014,62(10):1398-1407
To develop an autonomous mobile manipulation system that works in an unstructured environment, a modified image-based visual servo (IBVS) controller using a hybrid camera configuration is proposed in this paper. In particular, an eye-in-hand web camera is employed to visually track the target object, while a stereo camera measures the depth information online. A modified image-based controller is developed to utilize the information from the two cameras. In addition, a rule base is integrated into the visual servo controller to adaptively tune its gain based on the image deviation data, so as to improve the response speed of the controller. A physical mobile manipulation system is developed and the IBVS controller is implemented on it. The experimental results obtained with the system validate the developed approach.
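The abstract does not give the rule base itself; as an illustration of gain tuning from image deviation, here is a smooth schedule (all constants hypothetical) that plays the same role: a high gain far from the goal for speed, a low gain near it to limit overshoot:

```python
import math

def adaptive_gain(err_norm, g_min=0.2, g_max=1.5, e_scale=5.0):
    """Interpolate the IBVS gain between g_min (small image error) and
    g_max (large image error) with an exponential saturation curve,
    in place of a discrete rule base."""
    return g_min + (g_max - g_min) * (1.0 - math.exp(-err_norm / e_scale))

# The gain grows monotonically with the image-error norm (in pixels).
gains = [adaptive_gain(e) for e in (0.0, 5.0, 50.0)]
```

A fuzzy rule base would replace the exponential with a handful of if-then rules over error bands, but the controller-side effect (error-dependent gain) is the same.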
In order to develop an autonomous mobile manipulation system that works in an unstructured environment, a modified image-based visual servo (IBVS) controller using hybrid camera configuration is proposed in this paper. In particular, an eye-in-hand web camera is employed to visually track the target object while a stereo camera is used to measure the depth information online. A modified image-based controller is developed to utilize the information from the two cameras. In addition, a rule base is integrated into the visual servo controller to adaptively tune its gain based on the image deviation data so as to improve the response speed of the controller. A physical mobile manipulation system is developed and the developed IBVS controller is implemented. The experimental results obtained using the systems validate the developed approach. 相似文献