Similar Documents
 20 similar documents found (search time: 156 ms)
1.
New Developments in CEI Information Navigation (guest editor: Zhu Zhaofa). Entering 1997, CEI Information Navigation has embarked on a new journey. The CEInet Information Navigation Division will further step up the collection of information entries, striving to rapidly expand the coverage of CEI Information Navigation and to meet the ever-growing demand for information queries. To enliven the content and make the information service more attractive...

2.
This paper investigates a new ROM-solidification and loading technique for navigation software written in Microsoft C 6.00. It enables C-language navigation software to run on "bare" 80286/80386 machines without an operating system, provides a sound solution to burning C application programs into ROM and loading them, and makes microcomputer development in C simpler and more convenient, while fully preserving C's characteristic advantages: good portability, wide applicability, great programming freedom, and concise expression. It is especially well suited to the design of large, complex software, and the technique applies equally to general industrial control systems.

3.
CEI Information Navigation: Review and Prospects. Information navigation, as the name suggests, is an online information retrieval engine that guides users through the ocean of information. In today's Internet world, without retrieval engines such as Yahoo, AltaVista, and Infoseek, the Internet would become...

4.
Construction of Specialized Chemistry Web Sites and Navigation of Internet Chemistry Resources   (Cited: 12 total, 5 self-citations, 7 by others)
This paper analyzes the current state of specialized chemistry Web site construction at home and abroad, discusses classification methods and navigation mechanisms for Internet chemistry resources, and introduces the design ideas behind the ChemSoft and ChemSource sites and the design of their navigation pages.

5.
This paper introduces ship navigation communication protocols and implements a ship navigation communication simulator under Windows by using the ActiveX control MSCOMM.OCX in VB 6.0 to drive the microcomputer's serial port.
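The abstract does not name the protocol, but NMEA 0183 is the standard sentence format for ship navigation equipment, and any simulator for it must frame its sentences with the XOR checksum the protocol requires. A minimal sketch in Python (the original is a VB6/MSCOMM program, which is not reproduced here; the function name is illustrative):

```python
def nmea_sentence(body):
    """Frame an NMEA 0183 sentence: '$' + body + '*' + a two-digit hex
    checksum, where the checksum is the XOR of every byte of the body."""
    checksum = 0
    for ch in body:
        checksum ^= ord(ch)
    return "${}*{:02X}".format(body, checksum)

# A simulator would emit sentences like this over the serial port:
print(nmea_sentence("GPGLL,4916.45,N,12311.12,W,225444,A"))
```

A simulator built this way only needs a table of sentence bodies and a serial-port write loop; the checksum framing is the part the receiving equipment actually validates.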

6.
This paper proposes a neural-network-based hypertext navigation mechanism and, taking a hypertext-structured CAI system as an example, presents the structure of the navigation neural network, its learning algorithm, and the method for selecting its parameters.

7.

Aerodynamic-model-aided navigation is a new navigation method in which information from an aerodynamic model describing the vehicle's flight state is fused with information from existing navigation systems, improving navigation accuracy and reliability. It has attracted the attention of researchers at home and abroad in recent years and promises to become a new autonomous navigation method for flight vehicles. Based on a survey and analysis of current research on aerodynamic-model-aided navigation, this paper explains the concept and principle of the method; analyzes the respective characteristics of the three main technical schemes (aerodynamic model/inertial navigation fusion, aerodynamic model/satellite navigation fusion, and aerodynamic model/inertial/satellite navigation fusion); compares the method comprehensively with several current mainstream aided-navigation methods, analyzing its technical advantages and application prospects; and, in view of the current state of research, discusses the key technologies and directions for follow-on work. Aerodynamic-model-aided navigation is closely tied to a vehicle's aerodynamic characteristics and its guidance, navigation, and control processes, so research on the method helps advance each of the three GNC directions and deepen their integration.


8.
The European Space Agency and ALCATEL-SEL have completed pre-development of a GPS receiver for space applications. Two GPS receivers have been produced and delivered to the ESTEC radio navigation laboratory. In the summer of 1993, one of these receivers, after special qualification, flew on the German ASTROSPAS platform. The GPS receivers were tested under a variety of simulation scenarios, including differential and relative navigation tests.

9.
Research and Practice on Several Issues in Multimedia CAI Courseware Development   (Cited: 18 total, 0 self-citations, 18 by others)
Web-based multimedia CAI courseware combines multimedia CAI courseware with Web technology and will become the principal form of multimedia CAI courseware in networked teaching. Using a concrete example, this paper introduces principles and methods for optimizing courseware pages during the development of Web-based multimedia CAI courseware, and describes in detail the techniques used to implement key functions such as navigation, search, and testing.

10.
A Survey of Vision-Based Simultaneous Localization and Mapping   (Cited: 4 total, 1 self-citation, 3 by others)
Vision-based autonomous navigation and path planning are key technologies of mobile robot research. This paper reviews, and looks ahead at, roughly three decades of development in vision-based navigation and simultaneous localization and mapping (SLAM). Visual navigation is divided into indoor and outdoor navigation, and the characteristics and methods of each subtype are described in detail. For indoor visual navigation, classic navigation models and techniques are presented, and the latest progress on the SLAM problem is discussed, namely the HTM-SLAM algorithm and feature-based algorithms; for outdoor visual navigation, current research activity at home and abroad is surveyed.

11.
Detection and Tracking of Moving Targets in Vision-Based Highway Vehicle Navigation   (Cited: 4 total, 0 self-citations, 4 by others)
By processing and analyzing long image sequences captured aboard a vehicle driving outdoors, this paper studies the detection and tracking of moving targets for computer-aided visual navigation of highway vehicles. The core of the algorithm is to detect the presence of targets in the image from horizontal and vertical edges and the local symmetry of a target's rear, then to build a state model describing the relative motion between target and camera and a perspective-projection model representing target features, and finally to estimate the 3D motion parameters with an extended Kalman filter.
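The paper's extended Kalman filter estimates 3D motion parameters through a nonlinear perspective-projection measurement model; that full model is beyond an abstract, but the predict/update cycle it relies on can be illustrated with a linear constant-velocity filter. The sketch below is a hypothetical 1D simplification, not the paper's model; the noise values are arbitrary illustrative choices:

```python
import numpy as np

def kalman_cv(zs, dt=0.1, q=1e-3, r=0.25):
    """Linear Kalman filter with state [position, velocity] and a
    constant-velocity motion model; zs are noisy position measurements."""
    F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition
    H = np.array([[1.0, 0.0]])              # we measure position only
    Q = q * np.eye(2)                       # process noise covariance
    R = np.array([[r]])                     # measurement noise covariance
    x = np.zeros((2, 1))                    # initial state estimate
    P = np.eye(2)                           # initial state covariance
    for z in zs:
        # Predict step: propagate state and covariance through the model.
        x = F @ x
        P = F @ P @ F.T + Q
        # Update step: correct with the measurement residual.
        y = np.array([[z]]) - H @ x
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
        x = x + K @ y
        P = (np.eye(2) - K @ H) @ P
    return x.ravel()                        # final [position, velocity]
```

An EKF replaces F and H with Jacobians of the nonlinear motion and projection models, but the two-phase loop is identical.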

12.
This paper describes a new approach for autonomous road following for an unmanned air vehicle (UAV) using a visual sensor. A road is defined as any continuous, extended, curvilinear feature, which can include city streets, highways, and dirt roads, as well as forest-fire perimeters, shorelines, and fenced borders. To achieve autonomous road-following, this paper utilizes Proportional Navigation as the basis for the guidance law, where visual information is directly fed back into the controller. The tracking target for the Proportional Navigation algorithm is chosen as the position on the edge of the camera frame at which the road flows into the image. Therefore, each frame in the video stream only needs to be searched on the edge of the frame, thereby significantly reducing the computational requirements of the computer vision algorithms. The tracking error defined in the camera reference frame shows that the Proportional Navigation guidance law results in a steady-state error caused by bends and turns in the road, which are perceived as road motion. The guidance algorithm is therefore adjusted using Augmented Proportional Navigation Guidance to account for the perceived road accelerations and to force the steady-state error to zero. The effectiveness of the solution is demonstrated through high-fidelity simulations, and with flight tests using a small autonomous UAV.
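The guidance law behind this abstract can be sketched in a few lines. Classic Proportional Navigation commands lateral acceleration proportional to the line-of-sight rate, and the augmented form adds a term for the perceived target (here, road) acceleration to drive the steady-state error to zero. The function below is the generic textbook form, not the paper's implementation; the gain N = 4 and the argument names are illustrative:

```python
def apng_accel(closing_speed, los_rate, perceived_accel=0.0, nav_gain=4.0):
    """Augmented Proportional Navigation: a_cmd = N*Vc*lambda_dot + (N/2)*a_t.
    closing_speed   (Vc)         : closure rate toward the tracked point, m/s
    los_rate        (lambda_dot) : line-of-sight rotation rate, rad/s
    perceived_accel (a_t)        : apparent road acceleration from bends, m/s^2
    """
    return nav_gain * closing_speed * los_rate + 0.5 * nav_gain * perceived_accel
```

With zero line-of-sight rate and a straight road the commanded acceleration is zero, i.e. the vehicle is already on an intercept course with the tracked road point; the augmentation term is what keeps this true when the road bends.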

13.
This paper describes an on-board vision sensor system that is developed specifically for small unmanned vehicle applications. For small vehicles, vision sensors have many advantages, including size, weight, and power consumption, over other sensors such as radar, sonar, and laser range finder, etc. A vision sensor is also uniquely suited for tasks such as target tracking and recognition that require visual information processing. However, it is difficult to meet the computing needs of real-time vision processing on a small robot. In this paper, we present the development of a field programmable gate array-based vision sensor and use a small ground vehicle to demonstrate that this vision sensor is able to detect and track features on a user-selected target from frame to frame and steer the small autonomous vehicle towards it. The sensor system utilizes hardware implementations of the rank transform for filtering, a Harris corner detector for feature detection, and a correlation algorithm for feature matching and tracking. With additional capabilities supported in software, the operational system communicates wirelessly with a base station, receiving commands, providing visual feedback to the user and allowing user input such as specifying targets to track. Since this vision sensor system uses reconfigurable hardware, other vision algorithms such as stereo vision and motion analysis can be implemented to reconfigure the system for other real-time vision applications.
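Of the pipeline named above, the Harris corner detector is the most standard building block. A NumPy sketch of its response map follows; the paper implements this in FPGA hardware, and the 3x3 box smoothing and k = 0.04 here are conventional choices, not values taken from the paper:

```python
import numpy as np

def harris_response(img, k=0.04):
    """Harris corner response R = det(M) - k * trace(M)^2, where M is the
    locally smoothed structure tensor of the image gradients."""
    Iy, Ix = np.gradient(img.astype(float))   # central-difference gradients
    def box(a):
        # Crude 3x3 box filter used as the local smoothing window.
        p = np.pad(a, 1, mode='edge')
        return sum(p[i:i + a.shape[0], j:j + a.shape[1]]
                   for i in range(3) for j in range(3)) / 9.0
    Sxx, Syy, Sxy = box(Ix * Ix), box(Iy * Iy), box(Ix * Iy)
    return Sxx * Syy - Sxy ** 2 - k * (Sxx + Syy) ** 2
```

Large positive responses mark corners, negative responses mark edges, and near-zero responses mark flat regions; a tracker keeps local maxima above a threshold.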

14.
Visual navigation is a challenging issue in automated robot control. In many robot applications, like object manipulation in hazardous environments or autonomous locomotion, it is necessary to automatically detect and avoid obstacles while planning a safe trajectory. In this context the detection of corridors of free space along the robot trajectory is a very important capability which requires nontrivial visual processing. In most cases it is possible to take advantage of the active control of the cameras. In this paper we propose a cooperative scheme in which motion and stereo vision are used to infer scene structure and determine free space areas. Binocular disparity, computed on several stereo images over time, is combined with optical flow from the same sequence to obtain a relative-depth map of the scene. Both the time to impact and depth scaled by the distance of the camera from the fixation point in space are considered as good relative measurements, which are viewer-based but centered on the environment. The need for calibrated parameters is considerably reduced by using an active control strategy. The cameras track a point in space independently of the robot motion, and the full rotation of the head, which includes the unknown robot motion, is derived from binocular image data. The feasibility of the approach in real robotic applications is demonstrated by several experiments performed on real image data acquired from an autonomous vehicle and a prototype camera head.
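The geometric relation underlying the disparity half of this scheme is the standard pinhole-stereo depth equation; a minimal sketch (parameter names are illustrative, and note that the paper's active, nearly uncalibrated setup deliberately replaces absolute depth with relative measures such as time to impact):

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Pinhole stereo: Z = f * B / d, with f and d in pixels, B in metres."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a visible point")
    return focal_px * baseline_m / disparity_px

def time_to_impact(depth_m, closing_speed_mps):
    """Time to impact tau = Z / Z_dot: a relative, calibration-light measure."""
    return depth_m / closing_speed_mps
```

Time to impact can also be read directly from optical-flow divergence, which is why combining flow with disparity reduces the dependence on calibrated parameters.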

15.
This paper presents an embedded omni-vision navigation system which involves landmark recognition, multi-object tracking, and vehicle localization. A new tracking algorithm, the feature matching embedded particle filter, is proposed. Landmark recognition is used to provide the front-end targets. A global localization method for omni-vision based on coordinate transformation is also proposed. A digital signal processor (DSP) provides the hardware platform for the on-board tracker. The dynamic navigator employs the DSP tracker to follow the landmarks in real time during arbitrary movement of the vehicle and computes the position for localization based on time-sequence image analysis. Experimental results demonstrate that the navigator can efficiently provide vehicle guidance.

16.
Underwater visual inspection is an important task for checking the structural integrity and biofouling of the ship hull surface to improve the operational safety and efficiency of ships and floating vessels. This paper describes the development of an autonomous in-water visual inspection system and its application to visual hull inspection of a full-scale ship. The developed system includes a hardware vehicle platform and software algorithms for autonomous operation of the vehicle. The algorithms for vehicle autonomy consist of the guidance, navigation, and control algorithms for real-time and onboard operation of the vehicle around the hull surface. The environmental perception of the developed system is mainly based on optical camera images, and various computer vision and optimization algorithms are used for vision-based navigation and visual mapping. In particular, a stereo camera is installed on the underwater vehicle to estimate instantaneous surface normal vectors, which enables high-precision navigation and robust visual mapping, not only on flat areas but also over moderately curved hull surface areas. The development process of the vehicle platform and the implemented algorithms are described. The results of the field experiment with a full-scale ship in a real sea environment are presented to demonstrate the feasibility and practical performance of the developed system.
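A stereo camera yields 3D points on the hull, and the surface normal mentioned above can be estimated from any three non-collinear neighbouring points by a cross product. A minimal sketch under that assumption (the actual system presumably fits normals over dense stereo patches rather than point triples):

```python
def surface_normal(p1, p2, p3):
    """Unit normal of the plane through three 3D points, via the cross
    product of two edge vectors of the triangle they form."""
    u = [p2[i] - p1[i] for i in range(3)]
    v = [p3[i] - p1[i] for i in range(3)]
    n = [u[1] * v[2] - u[2] * v[1],
         u[2] * v[0] - u[0] * v[2],
         u[0] * v[1] - u[1] * v[0]]
    norm = sum(c * c for c in n) ** 0.5   # zero if points are collinear
    return [c / norm for c in n]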

17.
A Fast Obstacle Detection Method Based on Color Stereo Vision   (Cited: 8 total, 0 self-citations, 8 by others)
Real-time obstacle detection is a key technique for machine-vision-based mobile robot and autonomous land vehicle navigation in unstructured environments. Considering the real-time requirement on the stereo matching algorithm, this paper first develops an adaptive color segmentation method, based on color features, for detecting possible obstacle regions, and then introduces a simple region-based binocular stereo matching algorithm for recognizing real obstacles. Obstacle detection is implemented by combining the road-color adaptive segmentation method with the region-based stereo vision method. Extensive experimental results show that the proposed approach detects obstacles quickly and effectively, and that the algorithm is particularly well suited to road environments in which the road is relatively flat and of roughly uniform color.

18.
In this paper, we present a real-time high-precision visual localization system for an autonomous vehicle which employs only low-cost stereo cameras to localize the vehicle with a priori map built using a more expensive 3D LiDAR sensor. To this end, we construct two different visual maps: a sparse feature visual map for visual odometry (VO) based motion tracking, and a semidense visual map for registration with the prior LiDAR map. To register two point clouds sourced from different modalities (i.e., cameras and LiDAR), we leverage probabilistic weighted normal distributions transformation (ProW-NDT), by particularly taking into account the uncertainty of source point clouds. The registration results are then fused via pose graph optimization to correct the VO drift. Moreover, surfels extracted from the prior LiDAR map are used to refine the sparse 3D visual features that will further improve VO-based motion estimation. The proposed system has been tested extensively in both simulated and real-world experiments, showing that robust, high-precision, real-time localization can be achieved.

19.
Information about vehicles on the road is very important for maintaining traffic control under today's complex traffic conditions. Images of vehicles are captured by cameras directed at the vehicles. This paper proposes a new vehicle tracking mechanism using license plate recognition technology, which is essential to obtaining information about vehicles on the roads. The proposed method is a real-time processing system using multistep image processing, together with recognition and tracking processes on 2D and 3D images. Experimental results on images of real environments are shown for recognition and tracking with the proposed method.

20.
We present a system that estimates the motion of a stereo head, or a single moving camera, based on video input. The system operates in real time with low delay, and the motion estimates are used for navigational purposes. The front end of the system is a feature tracker. Point features are matched between pairs of frames and linked into image trajectories at video rate. Robust estimates of the camera motion are then produced from the feature tracks using a geometric hypothesize-and-test architecture. This generates motion estimates from visual input alone. No prior knowledge of the scene or the motion is necessary. The visual estimates can also be used in conjunction with information from other sources, such as a global positioning system, inertia sensors, wheel encoders, etc. The pose estimation method has been applied successfully to video from aerial, automotive, and handheld platforms. We focus on results obtained with a stereo head mounted on an autonomous ground vehicle. We give examples of camera trajectories estimated in real time purely from images over previously unseen distances (600 m) and periods of time. © 2006 Wiley Periodicals, Inc.
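The "geometric hypothesize-and-test architecture" above is a RANSAC-style loop: repeatedly fit a model to a minimal random sample of the data, then keep the hypothesis that the most observations agree with. The sketch below illustrates the pattern on 2D line fitting rather than camera motion (a hypothetical reduction; the real system fits multi-view geometry to feature tracks):

```python
import random

def ransac_line(points, iters=200, tol=0.1, seed=0):
    """Hypothesize-and-test fit of a 2D line y = a*x + b.
    Each iteration: sample a minimal set (2 points), fit a candidate
    line, count inliers within tol; keep the best-supported candidate."""
    rng = random.Random(seed)
    best, best_inliers = None, []
    for _ in range(iters):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        if x1 == x2:
            continue                       # degenerate minimal sample
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        inliers = [(x, y) for x, y in points if abs(y - (a * x + b)) < tol]
        if len(inliers) > len(best_inliers):
            best, best_inliers = (a, b), inliers
    return best, best_inliers
```

The same skeleton scales to pose estimation by swapping the minimal sample (e.g. five point correspondences) and the model (an essential matrix) while keeping the inlier-counting vote.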


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号