Similar Documents
20 similar documents found (search time: 0 ms)
1.
Abstract— Today, high‐end simulation is demanding eye‐limiting resolution along with extremely large fields of view. This represents a tremendous challenge if none of the other desirable features and performance of traditional systems is to be lost. Image‐generating computers continue to become more capable, but the new demands placed upon them by these display technologies are proving difficult to realize at an economic price. This paper explores SEOS's investigation into this exciting new generation of simulation. A solution is outlined that delivers the required resolution, yet keeps the demands on the driving image‐generating computers to an acceptable level. At the same time, maintenance and alignment are simplified.

2.
Structure from motion with wide circular field of view cameras   Cited by: 2 (self-citations: 0, others: 2)
This paper presents a method for fully automatic and robust estimation of two-view geometry, autocalibration, and 3D metric reconstruction from point correspondences in images taken by cameras with wide circular field of view. We focus on cameras which have more than 180° field of view and for which the standard perspective camera model is not sufficient, e.g., the cameras equipped with circular fish-eye lenses Nikon FC-E8 (183°), Sigma 8 mm-f4-EX (180°), or with curved conical mirrors. We assume a circular field of view and axially symmetric image projection to autocalibrate the cameras. Many wide field of view cameras can still be modeled by the central projection followed by a nonlinear image mapping. Examples are the above-mentioned fish-eye lenses and properly assembled catadioptric cameras with conical mirrors. We show that epipolar geometry of these cameras can be estimated from a small number of correspondences by solving a polynomial eigenvalue problem. This allows the use of efficient RANSAC robust estimation to find the image projection model, the epipolar geometry, and the selection of true point correspondences from tentative correspondences contaminated by mismatches. Real catadioptric cameras are often slightly noncentral. We show that the proposed autocalibration with approximate central models is usually good enough to get correct point correspondences which can be used with accurate noncentral models in a bundle adjustment to obtain accurate 3D scene reconstruction. Noncentral camera models are dealt with and results are shown for catadioptric cameras with parabolic and spherical mirrors.
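The abstract above relies on RANSAC robust estimation driven by a minimal solver (there, a polynomial eigenvalue problem). The hypothesize-and-verify loop itself is generic; the sketch below shows it on a deliberately simple stand-in model, two-point line fitting with synthetic outliers, not on the paper's epipolar-geometry solver:

```python
import random

def ransac(points, fit_model, residual, min_samples, threshold, iterations=200):
    """Generic RANSAC loop: repeatedly fit a model to a minimal sample
    and keep the model with the largest consensus set of inliers."""
    best_model, best_inliers = None, []
    rng = random.Random(0)  # fixed seed for repeatability
    for _ in range(iterations):
        sample = rng.sample(points, min_samples)
        model = fit_model(sample)
        if model is None:  # degenerate sample, skip
            continue
        inliers = [p for p in points if residual(model, p) < threshold]
        if len(inliers) > len(best_inliers):
            best_model, best_inliers = model, inliers
    return best_model, best_inliers

# Toy demo: fit y = a*x + b from two-point samples, with gross outliers.
def fit_line(sample):
    (x1, y1), (x2, y2) = sample
    if x1 == x2:
        return None
    a = (y2 - y1) / (x2 - x1)
    return a, y1 - a * x1

def line_residual(model, p):
    a, b = model
    return abs(p[1] - (a * p[0] + b))

points = [(x, 2.0 * x + 1.0) for x in range(10)] + [(3, 40.0), (7, -25.0)]
model, inliers = ransac(points, fit_line, line_residual, 2, 0.1)
```

In the paper, the minimal solver and residual would be the polynomial-eigenvalue epipolar solver and a geometric image-space error; only the loop structure carries over.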

3.
We present an approach to significantly enhance the spectral resolution of imaging systems by generalizing image mosaicing. A filter transmitting spatially varying spectral bands is rigidly attached to a camera. As the system moves, it senses each scene point multiple times, each time in a different spectral band. This is an additional dimension of the generalized mosaic paradigm, which has previously been demonstrated to yield high-dynamic-range radiometric images in a wide field of view using a spatially varying density filter. The resulting mosaic represents the spectrum at each scene point. The image acquisition is as easy as in traditional image mosaics. We derive an efficient scene sampling rate, and use a registration method that accommodates the spatially varying properties of the filter. Using the data acquired by this method, we demonstrate scene rendering under different simulated illumination spectra. We are also able to infer information about the scene illumination. The approach was tested using a standard 8-bit black-and-white video camera and a fixed spatially varying spectral (interference) filter.

4.
A star sensor is a high-precision spatial attitude measurement device, and accurate calibration is an essential guarantee of its measurement precision. Building on an analysis of the factors affecting the measurement error of wide-field-of-view star sensors, and addressing the shortcomings of common calibration methods such as polynomial distortion models and direct mapping, a BP neural network optimized by a genetic algorithm is proposed for wide-field-of-view star sensor calibration. A calibration system for wide-field-of-view star sensors was built on this method and tested experimentally. The results show that the calibration reduces the single-star angular measurement error from 0.14° before calibration to 0.0053°, effectively improving the angular measurement accuracy of the wide-field-of-view star sensor, with high computational efficiency and stability.
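As a rough illustration of the genetic-algorithm half of the method above, the sketch below evolves the weights of a tiny 1-4-1 tanh network to fit an invented smooth distortion curve. The real method uses the GA together with BP training on star-sensor data; the target function, network size, and GA operators here are all assumptions for the example:

```python
import math, random

rng = random.Random(1)

# Toy calibration target: a smooth nonlinear distortion curve (invented).
def target(x):
    return x + 0.05 * math.sin(3.0 * x)

# A tiny 1-4-1 network; the genome is a flat list of its 13 weights/biases.
def net(genome, x):
    out = 0.0
    for i in range(4):
        w, b, v = genome[3 * i], genome[3 * i + 1], genome[3 * i + 2]
        out += v * math.tanh(w * x + b)
    return out + genome[12]

xs = [i / 20.0 for i in range(21)]

def fitness(genome):  # mean squared calibration error (lower is better)
    return sum((net(genome, x) - target(x)) ** 2 for x in xs) / len(xs)

def evolve(pop_size=60, genes=13, generations=150):
    pop = [[rng.uniform(-1, 1) for _ in range(genes)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        parents = pop[: pop_size // 2]            # truncation selection (elitist)
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(genes)             # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.3:                 # Gaussian mutation
                child[rng.randrange(genes)] += rng.gauss(0, 0.2)
            children.append(child)
        pop = parents + children
    return min(pop, key=fitness)

best = evolve()
```

Because the parents survive each generation, the best-so-far fitness never worsens; the GA's role (in the paper as here) is to escape the poor local minima that plain gradient-based BP training can fall into.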

5.
ABSTRACT

Vegetation is an important land-cover type and its growth characteristics have potential for improving land-cover classification accuracy using remote-sensing data. However, due to a lack of suitable remote-sensing data, temporal features are difficult to acquire for high spatial resolution land-cover classification. Several studies have extracted temporal features by fusing time-series Moderate Resolution Imaging Spectroradiometer data and Landsat data. Nevertheless, this method requires the assumption that no land-cover change occurs during the period of the blended data, and the fusion results also contain errors that affect temporal feature extraction. Therefore, time-series high spatial resolution data from a single sensor are ideal for land-cover classification using temporal features. The Chinese GF-1 satellite wide field view (WFV) sensor can acquire multispectral data with decametric spatial resolution, high temporal resolution, and wide coverage, which contain abundant temporal information for improving land-cover classification accuracy. It is therefore important to investigate the performance of GF-1 WFV data for land-cover classification. Time-series GF-1 WFV data covering the vegetation growth period were collected and temporal features reflecting the dynamic change characteristics of ground objects were extracted. Then, a Support Vector Machine classifier was applied to land-cover classification based on the spectral features alone and on their combination with temporal features. The validation results indicated that temporal features could effectively reflect the growth characteristics of different vegetation and improved classification accuracy by approximately 7%, reaching 92.89%, with vegetation-type identification accuracy greatly improved. The study confirmed that GF-1 WFV data perform well on land-cover classification and can provide reliable high spatial resolution land-cover data for related applications.
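To illustrate why temporal features help where single-date spectra do not, the toy experiment below gives two classes near-identical synthetic spectra but different seasonal NDVI profiles, and classifies with a simple nearest-centroid rule as a stand-in for the paper's SVM. All band counts, profiles, and noise levels are invented for the example:

```python
import random

rng = random.Random(42)

def make_sample(label):
    # Both classes share similar single-date spectra (4 WFV-like bands)...
    spectral = [0.3 + rng.gauss(0, 0.02) for _ in range(4)]
    # ...but differ in NDVI trajectory over the growing season:
    # vegetation greens up and senesces, built-up stays flat.
    if label == "vegetation":
        temporal = [0.2, 0.5, 0.8, 0.6, 0.3]
    else:
        temporal = [0.2, 0.2, 0.2, 0.2, 0.2]
    temporal = [v + rng.gauss(0, 0.05) for v in temporal]
    return spectral, temporal

def centroid(vectors):
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def classify(x, centroids):
    def dist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(centroids, key=lambda lab: dist(x, centroids[lab]))

def accuracy(use_temporal):
    labels = ("vegetation", "built-up")
    train = {lab: [make_sample(lab) for _ in range(20)] for lab in labels}
    test = [(lab, make_sample(lab)) for lab in labels for _ in range(50)]
    feats = lambda s: s[0] + s[1] if use_temporal else s[0]
    cents = {lab: centroid([feats(s) for s in train[lab]]) for lab in train}
    hits = sum(classify(feats(s), cents) == lab for lab, s in test)
    return hits / len(test)

acc_spec = accuracy(False)   # spectra only: classes are indistinguishable
acc_both = accuracy(True)    # spectra + temporal profile
```

With spectral features alone the two classes are statistically identical, so accuracy hovers near chance; appending the temporal profile makes them trivially separable, mirroring the roughly 7% gain the abstract reports on real data.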

6.
Objective: Visual saliency plays an important role in many vision-driven applications, and these application areas are shifting from 2D to 3D vision, so saliency models based on RGB-D data have attracted wide attention. Unlike saliency in 2D images, RGB-D saliency involves cues from several different modalities. These multimodal cues both complement and compete with one another, and exploiting and fusing them effectively remains a challenge. Traditional fusion models struggle to take full advantage of multimodal cues, so this work studies the fusion of multimodal cues in the formation of RGB-D saliency. Method: An RGB-D saliency detection model based on a conditional random field over superpixels is proposed. Saliency cues of different modalities are extracted, including planar, depth, and motion cues. A conditional random field is built with superpixels as units, and a global energy function that combines the influence of the multimodal cues with a smoothness constraint on neighbouring saliency values is designed as the optimization objective, characterizing the interaction mechanism among the multimodal cues. The weight of each cue in the energy function is learned by a convolutional neural network. Results: Experiments on two public RGB-D video saliency datasets compared the model with six saliency detection methods; the proposed model outperforms the current state of the art on all datasets and evaluation metrics. Relative to the second-best results, its AUC (area under curve), sAUC (shuffled AUC), SIM (similarity), PCC (Pearson correlation coefficient), and NSS (normalized scanpath saliency) scores improve by 2.3%, 2.3%, 18.9%, 21.6%, and 56.2% on the IRCCyN dataset, and by 2.0%, 1.4%, 29.1%, 10.6%, and 23.3% on the DML-iTrack-3D dataset. Internal comparisons further verify that the proposed fusion outperforms other traditional fusion methods. Conclusion: The conditional random field and convolutional neural network in the proposed RGB-D saliency detection model make full use of the strengths of the different modal cues and fuse them effectively, improving saliency detection performance and benefiting vision-driven applications.
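The global energy described above, weighted unary cue terms plus a neighbourhood smoothness term, can be sketched on a toy superpixel chain. Here the cue weights are fixed by hand rather than learned by a CNN, and simple coordinate descent stands in for full CRF inference; cue values and the graph are invented:

```python
def fuse_saliency(cues, weights, neighbors, lam=0.5, iters=100):
    """Minimize E(s) = sum_i sum_k w_k (s_i - c_ki)^2
                     + lam * sum_{(i,j) in edges} (s_i - s_j)^2
    by coordinate descent; each update is the closed-form minimizer
    of the quadratic energy in s_i with all other values fixed."""
    n = len(cues[0])
    s = [sum(c[i] * w for c, w in zip(cues, weights)) / sum(weights)
         for i in range(n)]
    for _ in range(iters):
        for i in range(n):
            num = (sum(w * c[i] for c, w in zip(cues, weights))
                   + lam * sum(s[j] for j in neighbors[i]))
            den = sum(weights) + lam * len(neighbors[i])
            s[i] = num / den
    return s

# Toy example: 5 superpixels in a chain; a depth cue and an appearance cue
# disagree on node 2, and the smoothness term arbitrates between them.
depth_cue = [0.1, 0.1, 0.9, 0.9, 0.9]
color_cue = [0.1, 0.1, 0.1, 0.9, 0.9]
neighbors = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
sal = fuse_saliency([depth_cue, color_cue], [0.7, 0.3], neighbors)
```

The energy being quadratic and convex, coordinate descent converges to the unique minimum; in the paper the same kind of global objective mediates how competing modal cues are reconciled per superpixel.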

7.
Novel view synthesis aims to generate an image of a scene from a new viewpoint given several reference images. In multi-object scenes, however, occlusions between objects leave object information incomplete, producing artifacts and misalignment in the generated novel-view images. To address this problem, a novel view synthesis network guided by scene layouts is proposed, together with a newly annotated multi-object scene dataset (multi-objects novel view synthesis, MONVS). First, several scene layouts and the corresponding camera poses are fed into a layout prediction module to compute the scene layout under the new viewpoint. Then, the annotated object bounding boxes are used to build per-object sets, and a pixel prediction module generates each object's appearance in the new view. Finally, the predicted novel-view layout and the per-object information are passed to a scene generator that composes the novel-view image. Comparisons with several recent methods on the MONVS and ShapeNet cars datasets show, both quantitatively and visually, that the proposed method performs well on both datasets, effectively reducing artifacts and inaccurate object placement in novel view synthesis for multi-object scenes.

8.
This paper describes our recent experimental evaluation of Information‐Rich Virtual Environment (IRVE) interfaces. To explore the depth cue/visibility tradeoff between annotation schemes, we design and evaluate two information layout techniques to support search and comparison tasks. The techniques provide different depth and association cues between objects and their labels: labels were displayed either in the virtual world relative to their referent (Object Space) or on an image plane workspace (Viewport Space). The Software Field of View (SFOV) was controlled to 60° or 100° of vertical angle and two groups were tested: those running on a single monitor and those on a tiled nine‐panel display. Users were timed, tracked for correctness, and gave ratings for both difficulty and satisfaction on each task. Significant advantages were found for the Viewport interface, and for high SFOV. The interactions between these variables suggest special design considerations to effectively support search and comparison performance across monitor configurations and projection distortions. Copyright © 2006 John Wiley & Sons, Ltd.

9.
Abstract

A new imaging algorithm is presented for Synthetic Aperture Radar (SAR) that is exact in the sense that it is capable of producing a complex image with excellent geometrical, radiometric and phase fidelity. No interpolations or significant approximations are required, yet the method accomplishes range curvature correction over the complete range swath. The key to the approach is a quadratic phase perturbation of the range linearly frequency modulated signals while in the range signal, azimuth frequency transform (Doppler) domain. Range curvature correction is completed by a phase multiply in the two-dimensional frequency domain. Other operations required are relatively conventional. The method is generalizable to imaging geometries encountered in squint and spotlight SAR, inverse SAR, seismics, sonar, and tomography.
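The core mechanism above, moving signal energy by a phase multiply in the frequency domain rather than by interpolation, can be shown with the simplest case: a linear phase ramp applied in the DFT domain shifts a pulse exactly (the algorithm itself uses quadratic and other phase laws for curvature correction, which this sketch does not reproduce):

```python
import cmath

def dft(x):
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N)
                for n in range(N)) for k in range(N)]

def idft(X):
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N)
                for k in range(N)) / N for n in range(N)]

def shift_by_phase(x, delay):
    """Delay a signal by `delay` samples via a linear phase multiply in
    the frequency domain -- the same mechanism a phase-multiply SAR
    algorithm uses (with a different phase law) to move range-curved
    energy back to its correct range cell without interpolation."""
    N = len(x)
    X = dft(x)
    Y = [X[k] * cmath.exp(-2j * cmath.pi * k * delay / N) for k in range(N)]
    return idft(Y)

pulse = [0.0] * 16
pulse[3] = 1.0
shifted = shift_by_phase(pulse, 5)  # energy moves (circularly) from bin 3 to bin 8
```

For an integer delay this is an exact circular shift, with no interpolation error, which is precisely the fidelity argument the abstract makes for frequency-domain phase multiplies.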

10.
J.R. Banbury 《Displays》1983,4(2):89-96
Head-up displays in current production aircraft have a restricted field of view caused by the relatively small diameter of the collimating optics. There is a growing interest in alternative designs which make a better field of view available to the pilot. Several possible design options for achieving a wide field are outlined. The new methods usually rely on the properties of diffractive optical elements to achieve a satisfactory performance with respect to accuracy, photometric efficiency and sunlight rejection. Some advantages arising from the particular characteristics of diffractive elements are considered. Most wide field of view displays are of the ‘projected porthole’ type, i.e. the exit pupil of the system is not within the equipment but is instead projected to the observer's eye position. Definitions of the instantaneous and total fields of view are discussed and compared with those for the conventional head-up display. As wide field of view displays become more readily available it is important to establish whether the additional cost and bulk of the equipment is justified by gains in operational efficiency. The paper concludes by outlining some possible uses of the larger field.

11.
于明  邢章浩  刘依 《控制与决策》2023,38(9):2487-2495
Most current RGB-D salient object detection methods fuse RGB features and depth features with a symmetric structure, applying identical operations to both kinds of features; this ignores the differences between RGB and depth images and easily leads to incorrect detection results. To address this problem, a cross-modal fusion RGB-D salient object detection method with an asymmetric structure is proposed. A global perception module extracts global features from the RGB image, and a depth denoising module is designed to filter out the heavy noise in low-quality depth images. The proposed asymmetric fusion module then fully exploits the differences between the two modalities: the depth features localize the salient object and guide the fusion of the RGB features, which fill in the object's details, so that the two modalities complement each other through their respective strengths. Extensive experiments on four public RGB-D salient object detection datasets verify that the proposed method outperforms current mainstream methods.

12.
Camera calibration for wide-field-of-view binocular stereo vision   Cited by: 1 (self-citations: 0, others: 1)
For wide-field-of-view vision measurement applications, a freely rotatable cross-shaped target was designed and built on the basis of an analysis of the camera imaging model, enabling accurate calibration of a wide-field binocular vision system. The cross target is placed uniformly at multiple positions throughout the measurement volume while the two cameras synchronously capture multiple target images. Initial values of the camera parameters are obtained from the essential matrix, and the optimal parameters are then obtained by self-calibrating bundle adjustment. The method does not require the feature points to be coplanar; only the physical distances between feature points need to be known, which reduces the difficulty of target manufacture. In tests with TN3DOMS.S on a standard reference bar over a 1500 mm × 1500 mm measurement range, the root-mean-square error was 0.06 mm.
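The refinement step above, minimizing reprojection error over camera parameters after an essential-matrix initialization, can be sketched in miniature: Gauss-Newton on a single focal-length parameter with invented 3D points (the actual self-calibrating bundle adjustment optimizes poses, intrinsics, and points jointly):

```python
# Toy: refine a pinhole focal length by minimizing reprojection error,
# the same least-squares criterion bundle adjustment uses to polish
# parameters initialized from the essential matrix. Points are invented.
points = [(0.2, 0.1, 2.0), (-0.3, 0.4, 3.0), (0.5, -0.2, 2.5), (0.1, 0.3, 4.0)]
true_f = 800.0

def project(f, X, Y, Z):
    return (f * X / Z, f * Y / Z)

observed = [project(true_f, *p) for p in points]

def refine_focal(f, iters=20):
    """Gauss-Newton for the single parameter f: each residual is
    r = f*(X/Z) - u with derivative dr/df = X/Z, so the normal
    equation reduces to a scalar update."""
    for _ in range(iters):
        num = den = 0.0
        for (X, Y, Z), (u, v) in zip(points, observed):
            for coord, obs in ((X, u), (Y, v)):
                J = coord / Z          # Jacobian entry
                r = f * J - obs        # reprojection residual
                num += J * r
                den += J * J
        f -= num / den                 # Gauss-Newton step
    return f

f_est = refine_focal(500.0)  # deliberately poor initial guess
```

Because this toy problem is linear in f, the solver recovers the true focal length in a single step; real bundle adjustment iterates the same normal-equation machinery over a much larger, nonlinear parameter vector.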

13.
Human action recognition is an important research direction in computer vision and pattern recognition. The complexity of human behaviour and the variation among different people performing the same action make action recognition a challenging problem. RGB-D cameras, based on a new generation of sensing technology, record RGB and depth images simultaneously and can extract skeleton joint information in real time; making full use of this information has become a research hotspot and breakthrough point in action recognition. This paper proposes a new RGB-image feature extraction method based on a Gaussian-weighted pyramid histogram of oriented gradients, and builds a multimodal feature-fusion framework for action recognition. Experiments with the proposed feature and framework on the UTKinect-Action3D, MSR-Action3D, and Florence 3D Actions datasets show recognition accuracies of 97.5%, 93.1%, and 91.7% respectively, demonstrating the effectiveness of the action recognition framework.

14.
Atchley P  Dressel J 《Human factors》2004,46(4):664-673
The purpose of these two experiments is to investigate one possible mechanism that might account for an increase in crash risk with in-car phone use: a reduction in the functional field of view. In two between-subjects experiments, college undergraduates performed a task designed to measure the functional field of view in isolation and while performing a hands-free conversational task. In both experiments, the addition of the conversational task led to large reductions in the functional field of view. Because similar reductions have been shown to increase crash risk, reductions in the functional field of view by conversation may be an important mechanism involved in increased risk for crashes with in-car phone use. Actual or potential applications of this research include improving driver performance.

15.
We present a practical system which can provide a textured full-body avatar within 3 s. It uses sixteen RGB-depth (RGB-D) cameras, ten of which are arranged to capture the body, while six target the important head region. The configuration of the multiple cameras is formulated as a constraint-based minimum set space-covering problem, which is approximately solved by a heuristic algorithm. The camera layout determined can cover the full-body surface of an adult, with geometric errors of less than 5 mm. After arranging the cameras, they are calibrated using a mannequin before scanning real humans. The 16 RGB-D images are all captured within 1 s, which both avoids the need for the subject to attempt to remain still for an uncomfortable period, and helps to keep pose changes between different cameras small. All scans are combined and processed to reconstruct the photorealistic textured mesh in 2 s. During both system calibration and working capture of a real subject, the high-quality RGB information is exploited to assist geometric reconstruction and texture stitching optimization.
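The camera-layout formulation above, a constraint-based minimum set-covering problem solved heuristically, can be illustrated with the standard greedy heuristic. The surface patches and candidate poses below are invented for the example and are not the paper's 16-camera configuration:

```python
def greedy_cover(universe, candidates):
    """Greedy heuristic for minimum set cover: repeatedly pick the
    candidate covering the most still-uncovered surface patches."""
    uncovered = set(universe)
    chosen = []
    while uncovered:
        best = max(candidates, key=lambda c: len(candidates[c] & uncovered))
        if not candidates[best] & uncovered:
            raise ValueError("remaining patches cannot be covered")
        chosen.append(best)
        uncovered -= candidates[best]
    return chosen

# Toy body surface split into 8 patches; each candidate camera pose
# covers a subset (names and coverage sets are invented).
patches = range(8)
poses = {
    "front-high": {0, 1, 2},
    "front-low": {2, 3},
    "back-high": {4, 5, 6},
    "back-low": {6, 7},
    "side": {1, 3, 5, 7},
}
layout = greedy_cover(patches, poses)
```

Greedy set cover is a classic approximation (within a logarithmic factor of optimal); a real system would add visibility and mounting constraints on top of the coverage sets.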

16.
Space-based target detection has become a frontier technology in space research. Because deep-space targets are distant, they image as point targets on the detector and are difficult to distinguish from false targets such as stars and planets by their shape. A wide-field-of-view star sensor, with its on-board navigation star catalogue, can effectively remove the stars among the false targets. Exploiting the different motion patterns of true and false targets in the right-ascension coordinate frame, combined with the star sensor's attitude matrix, genuine targets can be accurately identified against the complex stellar background, achieving the goal of protection. Simulation results verify the effectiveness of the method.

17.
RGB-D cameras (such as Microsoft's Kinect) capture colour images together with per-pixel depth information and are widely used for 3D map building on mobile robots. This paper presents a method that uses an RGB-D camera for robot self-localization and for building a 3D model of an indoor scene. First, the RGB-D camera acquires consecutive frames of the surrounding environment. Next, SURF feature points are extracted and matched between consecutive frames; the robot pose is computed from the displacement of the feature points, and a nonlinear least-squares optimization minimizes the bidirectional projection error of the corresponding points. Finally, keyframe techniques and a view-centred scheme are used to project the 3D point cloud observed by the camera into the global map according to the current pose. The method was tested in three different scenes and its performance compared under different feature types; over a trajectory of 5.88 m the error was only 0.023, and the method accurately reconstructed a 3D model of the surrounding environment.
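The pose-from-matches step above can be sketched in 2D: given registered feature correspondences, a closed-form least-squares rigid transform recovers rotation and translation. The paper works in 3D from SURF matches with depth and further refines by nonlinear least squares; this 2D toy with invented points only shows the basic alignment principle:

```python
import math

def estimate_rigid_2d(src, dst):
    """Closed-form least-squares rigid transform (R, t) mapping src -> dst
    in 2D: center both point sets, recover the rotation angle from the
    cross/dot correlation sums, then solve for the translation."""
    n = len(src)
    cs = [sum(p[i] for p in src) / n for i in (0, 1)]   # source centroid
    cd = [sum(p[i] for p in dst) / n for i in (0, 1)]   # target centroid
    sxx = sxy = 0.0
    for (x, y), (u, v) in zip(src, dst):
        ax, ay = x - cs[0], y - cs[1]
        bx, by = u - cd[0], v - cd[1]
        sxx += ax * bx + ay * by   # dot terms
        sxy += ax * by - ay * bx   # cross terms
    theta = math.atan2(sxy, sxx)
    c, s = math.cos(theta), math.sin(theta)
    tx = cd[0] - (c * cs[0] - s * cs[1])   # t = cd - R * cs
    ty = cd[1] - (s * cs[0] + c * cs[1])
    return theta, (tx, ty)

# Toy check: rotate a small "feature cloud" by 30 deg and translate it.
src = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 2.0)]
ang = math.radians(30)
dst = [(math.cos(ang) * x - math.sin(ang) * y + 0.5,
        math.sin(ang) * x + math.cos(ang) * y - 0.2) for x, y in src]
theta, t = estimate_rigid_2d(src, dst)
```

On noiseless correspondences the recovery is exact; with real, noisy SURF matches this closed-form estimate typically serves as the initialization that the nonlinear least-squares refinement then improves.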

18.
19.
20.

Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号