Similar Documents
 20 similar documents found.
1.
In inertial measurement, double integration of acceleration alone cannot accurately determine the distance traveled by a target object. An accelerometer is accurate and practical when the target moves along a straight line, but during motion in three-dimensional space its coordinate axes drift continuously as the object changes orientation. To address this problem, an angle-compensated multi-sensor data-fusion ranging algorithm for smartphones (ADC-R) is proposed. The accelerometer measures the acceleration of the moving object as the raw data for displacement computation; the smartphone gyroscope measures the object's angular velocity; and the output of the rotation-vector sensor is used as a parameter to transform the acceleration measured in the phone's dynamic coordinate frame into a static reference frame, after which the data are fused to complete the angle compensation. The final travel distance is then obtained by numerical integration, based on the physical relationship between acceleration and displacement, together with further error-correction techniques. Experimental results show that the method achieves high accuracy for short-range distance measurement, outperforming both the pure acceleration-integration algorithm and the accelerometer-gyroscope fusion algorithm.
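A minimal sketch of the angle-compensation idea in Python (assuming the rotation-vector reading is available as a unit quaternion and gravity has already been removed from the accelerometer signal; the function names and sampling scheme are illustrative, not taken from the paper):

    import numpy as np

    def quat_to_matrix(q):
        # Rotation matrix for a unit quaternion q = (w, x, y, z).
        w, x, y, z = q
        return np.array([
            [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
            [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
            [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
        ])

    def distance_travelled(accel_device, quats, dt):
        # Rotate each device-frame acceleration sample into the static
        # reference frame (angle compensation), then double-integrate.
        vel = np.zeros(3)
        pos = np.zeros(3)
        for a_dev, q in zip(accel_device, quats):
            a_ref = quat_to_matrix(q) @ a_dev   # device frame -> reference frame
            vel += a_ref * dt                   # first integration: velocity
            pos += vel * dt                     # second integration: displacement
        return np.linalg.norm(pos)

Drift grows quickly with this kind of dead reckoning, which is why the paper adds further error-correction steps and why the reported accuracy applies to short distances.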

2.
Design of a composite intelligent sensor based on MEMS technology
By organically combining a MEMS acceleration-sensitive element with a microprocessor and applying intelligent algorithms, a multifunctional composite sensor was designed that simultaneously measures the vibration, shock, and tilt angle of a moving object. Tests show that the sensor detects vibration, shock, and tilt information with sufficient accuracy; when applied in an automobile, it can promptly capture the vehicle's dynamic state and body attitude.

3.
This paper presents a tilt detector based on the STC12C5A60S2 microcontroller. Its core component is the ADXL345 tilt sensor, which measures the acceleration component along each axis in real time and thereby yields the tilt angle about each axis. When the ADXL345 tilts relative to the horizontal plane, the differential capacitance of its internal sensing element becomes unbalanced, with the imbalance ratio proportional to the acceleration; phase-sensitive demodulation recovers the magnitude and direction of the tilt acceleration. The sensor's electrical signal is amplified and transmitted to the microcontroller over I2C; the microcontroller reads and processes the data, displays the current tilt angle for each direction on a digital display, compares it with a preset angle, and raises an alarm when the preset tilt is reached so that operators can respond accordingly.
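A minimal sketch of how tilt angles can be computed from a static three-axis acceleration reading such as the ADXL345 provides (axis convention, threshold, and example values are hypothetical):

    import math

    def tilt_angles(ax, ay, az):
        # Pitch and roll (degrees) from a static acceleration reading,
        # using the direction of gravity as the vertical reference.
        pitch = math.degrees(math.atan2(ax, math.hypot(ay, az)))
        roll  = math.degrees(math.atan2(ay, math.hypot(ax, az)))
        return pitch, roll

    THRESHOLD_DEG = 15.0                          # hypothetical preset alarm angle
    pitch, roll = tilt_angles(0.34, 0.0, 0.94)    # readings in units of g
    if max(abs(pitch), abs(roll)) > THRESHOLD_DEG:
        print("tilt alarm")                       # corresponds to the alarm step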

4.
In underground surveying and drilling exploration, inclinometers are required to precisely locate and control the drill bit and to monitor its attitude changes underground in real time. Existing inclinometers typically combine a three-component fluxgate sensor with a three-component accelerometer; the structure is complex, and the axes of the fluxgate and the accelerometer must be aligned, which requires manual and algorithmic adjustment and increases the difficulty of sensor placement and compensation. This paper studies a new inclination sensor: using a ring-core design with an unfixed core, a two-component ring fluxgate sensor is constructed, and attitude changes such as the pitch angle of the measured object are obtained from the angular relationship between the core and the magnetic flux along the fluxgate's sensitive axes, via three-dimensional coordinate rotation. Experiments show that the new sensor measures attitude angles accurately and with high precision; it eliminates the accelerometer, simplifying the sensor structure and measurement parameters as well as the angle-measurement and orthogonality-compensation algorithms.

5.
《传感器世界》2012,(2):37-37
Japan's ZMP Inc. has released the wireless nine-axis motion sensor "e-nuvo IMU-Z2". The new product integrates a 3-axis accelerometer, a 3-axis gyroscope, and a 3-axis geomagnetic sensor; by combining the three, it can estimate the dynamic and static posture of a person or object and perform three-dimensional motion measurement. Measurement data can be transmitted wirelessly over distances of up to 100 m, enabling applications such as long-distance gait measurement and outdoor motion measurement.

6.
A multi-element ultrasonic ranging array mounted on a robot finger has been developed. The ranging system consists of concave piezoelectric-polymer ultrasonic transducers arranged in a three-dimensional concave layout. This arrangement reduces the transducers' radiation and increases the operating frequency from the base to the tip of the finger. Each element is connected to computer-controlled scanning and amplification units and acts as both transmitter and receiver. The paper discusses the general characteristics of the ultrasonic transducer array, the use of triangulation to detect and determine the orientation of arbitrarily placed object surfaces, and the use of ultrasonic sensing to detect the presence of three-dimensional objects and determine their shape.

7.
Sensing the forces acting on an object is the basis for a successful robotic grasp, and measuring forces in three dimensions fully captures this information. Existing tactile sensors for three-dimensional force sensing during grasping still have shortcomings, so this paper designs a PVDF-based three-dimensional-force tactile sensor for robots. The paper presents the structural design of the sensor, establishes mathematical models of the piezoelectric film and the sensing-head structure, designs the conditioning circuit, and tests and validates the sensor. The results show that the sensor can effectively measure the three-dimensional force information during robotic grasping.

8.
Traditional accelerometer-based activity recognition methods usually assume that the sensing device is fixed in place; recognition performance degrades sharply when the device's orientation or position deviates from the assumed setting. However, the most widely used sensing device in ubiquitous computing, the smartphone, generally cannot have its orientation and position fixed in advance. To solve this problem, an accelerometer-based activity recognition method that is independent of device orientation and position is proposed. The method first applies a dimensionality-reduction step to convert the raw three-axis acceleration signal into an orientation-independent one-dimensional signal, then borrows the "motif" concept from bioinformatics to extract position-independent pattern features from the one-dimensional signal, and finally builds a vector space model (VSM) on these features to recognize activities. Experimental results show a recognition rate of 81.41% when the device's orientation and position are not fixed.
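A minimal sketch of one common way to produce an orientation-independent one-dimensional signal, namely the Euclidean norm of each acceleration sample; the paper's actual dimensionality-reduction step and the motif extraction are not reproduced here:

    import numpy as np

    def orientation_independent_signal(accel_xyz):
        # accel_xyz: array of shape (N, 3) with raw x/y/z acceleration.
        # The per-sample magnitude does not change with how the phone is
        # oriented in the pocket, giving a placement-robust 1-D signal.
        return np.linalg.norm(np.asarray(accel_xyz, dtype=float), axis=1)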

9.
Attitude parameters are a key quantity when testing moving objects. Since rate gyroscopes suffer large attitude-angle errors during high-speed motion, an attitude-measurement combination based on accelerometers and a coil-type magnetic sensor is proposed. The output of the coil-type magnetic sensor combines the carrier's spin rate and angular-velocity information, from which the carrier's angular velocity is derived; data fusion and attitude computation are then performed with an extended full-attitude quaternion algorithm. Hardware-in-the-loop simulation experiments on a three-axis turntable verify the correctness of the algorithm. The local magnetic field magnitude does not need to be known during the computation, and the method is simple and easy to implement.
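A minimal sketch of the quaternion attitude-propagation step that such a full-attitude quaternion algorithm builds on (the extraction of angular rate from the coil-type magnetic sensor and the data fusion itself are not shown):

    import numpy as np

    def propagate_quaternion(q, omega, dt):
        # One attitude-update step: integrate the body angular rate
        # omega (rad/s, 3-vector) into the attitude quaternion q = (w, x, y, z).
        wx, wy, wz = omega
        omega_mat = np.array([
            [0.0, -wx, -wy, -wz],
            [wx,  0.0,  wz, -wy],
            [wy, -wz,  0.0,  wx],
            [wz,  wy, -wx,  0.0],
        ])
        q = q + 0.5 * (omega_mat @ q) * dt      # dq/dt = 0.5 * Omega(omega) * q
        return q / np.linalg.norm(q)            # renormalize to unit length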

10.
A wireless motion sensor is proposed that computes its three-dimensional orientation in real time from measured three-axis acceleration, angular velocity, and magnetic flux density. It is small, lightweight, and low-cost, making it suitable for wearable computing. An accelerometer-based power-management algorithm is also proposed to extend the sensor's operating time. Experimental results show that, compared with similar sensors on the market, the sensor significantly reduces power consumption and extends battery life.
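A minimal sketch of an accelerometer-gated power-management policy of the kind described, where the power-hungry parts of the node are duty-cycled while the device is at rest (the threshold and window handling are hypothetical):

    import numpy as np

    MOTION_BAND_G = 0.05   # hypothetical tolerance around 1 g for the at-rest state

    def should_sleep(accel_window_g):
        # accel_window_g: recent (N, 3) acceleration samples in units of g.
        # If every sample magnitude stays close to 1 g, the node is at rest
        # and the gyroscope, magnetometer and radio can be powered down.
        mags = np.linalg.norm(np.asarray(accel_window_g, dtype=float), axis=1)
        return bool(np.all(np.abs(mags - 1.0) < MOTION_BAND_G))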

11.
To measure the displacement of a moving object, a displacement-measurement system was designed around the domestically produced high-precision MEMS capacitive accelerometer MSCA3002, with the 24-bit high-precision A/D converter ADS1255 and the high-performance ARM processor LM3S2B93 as the main controller; the hardware circuits and software algorithms are described in detail. The system converts the acceleration measured by the sensor into displacement through an integration algorithm. Experimental results show that the system samples with high precision, high speed, and small error: the A/D converter measures the acceleration signal to within 0.4%, and after integration the displacement measurement error is held to about 3%, achieving good displacement measurement.
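A minimal sketch of the acceleration-to-displacement integration step using the trapezoidal rule (the paper's hardware-specific calibration and error-correction details are not reproduced):

    import numpy as np

    def displacement_from_accel(accel, dt):
        # accel: 1-D acceleration record (m/s^2) sampled at interval dt (s).
        # Two trapezoidal integrations: acceleration -> velocity -> position.
        accel = np.asarray(accel, dtype=float)
        vel = np.concatenate(([0.0], np.cumsum(0.5 * (accel[1:] + accel[:-1]) * dt)))
        pos = np.concatenate(([0.0], np.cumsum(0.5 * (vel[1:] + vel[:-1]) * dt)))
        return pos[-1]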

12.
To measure the 3D shape of large objects, scanning by a moving range sensor is one of the most efficient methods. However, if we use moving range sensors, the obtained data have some distortions due to the movement of the sensor during the scanning process. In this paper, we propose a method for recovering correct 3D range data from a moving range sensor by using the multiple view geometry under projective projections in space-time. We assume that the range sensor radiates laser beams in a raster scan order, and that they are observed by two cameras. We first show that we can deal with range data as 2D images, and show that the extended multiple view geometry can be used for representing the relationship between the 2D image of range data and the 2D image of cameras. We next show that the extended multiple view geometry can be used for rectifying 3D data obtained by the moving range sensor. The method is implemented and tested on synthetic images and range data. The stability of the recovered 3D shape is also evaluated.

13.
In this paper, a 3D pose and attitude estimation system using inertial sensors was developed to provide feedback motion and attitude information for a humanoid robot. It has a very effective switching structure and is composed of three modules: a motion acceleration detector, a pseudo-accelerometer output estimator, and a linear acceleration estimator. The probability-based switching structure enables a tactful feedback loop for the extended Kalman filter inside the sensor system. Specially designed linear-rotation test equipment was built, and the experimental results showed fast convergence to actual values in addition to excellent responses. The output of the proposed 3D sensor can be transmitted to a humanoid at a frequency of 200 Hz.

14.
The view-independent visualization of 3D scenes is most often based on rendering accurate 3D models or on image-based rendering techniques. To compute the 3D structure of a scene from a moving vision sensor or to use image-based rendering approaches, we need to be able to estimate the motion of the sensor from the recorded image information with high accuracy, a problem that has been well-studied. In this work, we investigate the relationship between camera design and our ability to perform accurate 3D photography, by examining the influence of camera design on the estimation of the motion and structure of a scene from video data. By relating the differential structure of the time varying plenoptic function to different known and new camera designs, we can establish a hierarchy of cameras based upon the stability and complexity of the computations necessary to estimate structure and motion. At the low end of this hierarchy is the standard planar pinhole camera for which the structure from motion problem is non-linear and ill-posed. At the high end is a camera, which we call the full field of view polydioptric camera, for which the motion estimation problem can be solved independently of the depth of the scene, which leads to fast and robust algorithms for 3D photography. In between are multiple view cameras with a large field of view which we have built, as well as omni-directional sensors.

15.
Motion segmentation using occlusions
We examine the key role of occlusions in finding independently moving objects instantaneously in a video obtained by a moving camera with a restricted field of view. In this problem, the image motion is caused by the combined effect of camera motion (egomotion), structure (depth), and the independent motion of scene entities. For a camera with a restricted field of view undergoing a small motion between frames, there exists, in general, a set of 3D camera motions compatible with the observed flow field even if only a small amount of noise is present, leading to ambiguous 3D motion estimates. If separable sets of solutions exist, motion-based clustering can detect one category of moving objects. Even if a single inseparable set of solutions is found, we show that occlusion information can be used to find ordinal depth, which is critical in identifying a new class of moving objects. In order to find ordinal depth, occlusions must not only be known, but they must also be filled (grouped) with optical flow from neighboring regions. We present a novel algorithm for filling occlusions and deducing ordinal depth under general circumstances. Finally, we describe another category of moving objects which is detected using cardinal comparisons between structure from motion and structure estimates from another source (e.g., stereo).

16.
This paper presents a cost-efficient, real-time vision-sensor system for identifying, locating and tracking unknown objects randomly placed on a moving conveyor belt. Visual information from a conventional frame-store unit and the outputs of an end-effector-based proximity sensor are incorporated into a fuzzy-logic control algorithm that enables the robotic manipulator to grasp moving objects. The robot movements result from comparative measurements made by the sensors after the motion of the moving target has been predicted and the gripper has been brought, using the vision system, into a zone close to the object to be grasped. The moving object is then tracked by controlling the motion of the end-effector with end-effector-based infrared proximity sensors and the conveyor position encoder, keeping the gripper's axis on the median plane of the moving object. With this procedure and fuzzy-logic control, the system adapts to the pursuit of a moving object. Laboratory experiments are presented to demonstrate the performance of this system. ©1999 John Wiley & Sons, Inc.

17.
With the advent of mobile robots and on-board vision sensors mounted directly on the robot's wrist, new kinds of problems arise in the image processing field, for example dynamic scene analysis or motion estimation. The lack of flexibility of real experiments led us to implement at IRISA a general simulation tool devoted to the study of robots using moving vision sensors. VISYR allows us to simulate the image of its environment perceived by a robot during its motion. The first part of the paper is devoted to the modelling of the 3D scene containing complex objects and to the design of a suitable robotic vision sensor. In the second part, a new algorithm for the dynamic management of the local data base perceived by the sensor is presented. The parameters of the vision sensors are highly adjustable, and VISYR is conceived to allow the fast development of algorithms using dynamic vision data.

18.
Scanning by a moving range sensor from the air is one of the most effective methods for obtaining range data of large-scale objects, since it can measure regions that are invisible from the ground. The obtained data, however, have some distortions due to the sensor motion during the scanning period. In addition to these distorted range data, range data sets taken by other sensors fixed on the ground are usually available. Based on the overlapping regions visible from both the moving sensor and the fixed ones, we propose an extended alignment algorithm to rectify the distorted range data and to align them to the models built from the fixed sensors. Using CAD models, we estimate the accuracy and effectiveness of our proposed method. We then apply it to real data sets to prove the validity of the method.

19.
The structural features inherent in the visual motion field of a mobile robot contain useful clues about its navigation. The combination of these visual clues and additional inertial sensor information may allow reliable detection of the navigation direction for a mobile robot and also the independent motion that might be present in the 3D scene. The motion field, which is the 2D projection of the 3D scene variations induced by the camera-robot system, is estimated through optical flow calculations. The singular points of the global optical flow field of omnidirectional image sequences indicate the translational direction of the robot as well as the deviation from its planned path. It is also possible to detect motion patterns of near obstacles or independently moving objects of the scene. In this paper, we introduce the analysis of the intrinsic features of the omnidirectional motion fields, in combination with gyroscopical information, and give some examples of this preliminary analysis. © 2004 Wiley Periodicals, Inc.

20.
In robot navigation, one of the important and fundamental issues is to find the positions of landmarks or vision sensors located around the robot. This paper proposes a method for reconstructing the qualitative positions of multiple vision sensors from qualitative information observed by the vision sensors, i.e., the motion directions of moving objects. In order to directly acquire the qualitative positions of points, the proposed method iterates the following steps: 1) observing the motion directions (left or right) of moving objects with the vision sensors, 2) classifying the vision sensors into spatially classified pairs based on the motion directions, 3) acquiring three-point constraints, and 4) propagating the constraints. Compared with previous methods, which reconstruct the environment structure from quantitative measurements and acquire qualitative representations by abstracting it, this paper focuses on how to acquire qualitative positions of landmarks from low-level, simple, and reliable (that is, "qualitative") information. The method has been evaluated in simulations and also verified in the presence of observation errors.

