Similar Literature
20 similar documents found (search time: 312 ms)
1.
Starting from the illumination model of objects, and based on the observation that an object's color remains unchanged during motion, so that the quantities representing its hue and saturation are also invariant over time, a method for computing the optical flow field of color time-varying images from hue and saturation information is proposed. The method not only avoids the linear-correlation problem inherent in the RGB color model, but also agrees better with the principles of biological vision. The optical flow fields of color time-varying images are computed with both the HSV and HLS color models, and experimental results show that the algorithm yields good results.
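For reference, the kind of constraint such a method builds on can be written as the standard per-channel form of the optical flow constraint; this is a sketch in generic notation, not necessarily the exact equations used in the paper:

```latex
% Hue/saturation analogue of the brightness-constancy constraint (generic form).
% (u, v) is the flow; subscripts denote partial derivatives of the H and S channels.
\begin{aligned}
H_x u + H_y v + H_t &= 0, \\
S_x u + S_y v + S_t &= 0.
\end{aligned}
```

Because hue and saturation give two equations per pixel instead of the single grey-level equation, the flow (u, v) is better constrained, which is the motivation stated above.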

2.
A New Method for Reconstructing 3-D Motion and Structure from the Optical Flow Field   (Cited: 3; self-citations: 0; by others: 3)
A new linear method for computing 3-D motion and structure from a sparse optical flow field is proposed. The method combines the two main processing approaches of visual motion analysis: corner points in the image are selected as feature points and are detected and tracked through the image sequence. The displacements of the detected corners across the sequence are recorded, and it is proved in theory that the optical flow field of a time-varying image can be approximated by the displacement field of the corners, which yields a sparse optical flow field. On the basis of an optical flow motion model, a linear method for reconstructing 3-D object motion and structure from the sparse flow field is derived. Validation on real image sequences shows that the algorithm achieves good results.
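As an illustration of the corner-based sparse flow front end described above, here is a minimal sketch using OpenCV's corner detector and pyramidal Lucas-Kanade tracker as stand-ins; this is not the authors' implementation, and the 3-D reconstruction step is omitted:

```python
# Sparse optical flow from tracked corners -- a minimal sketch of the corner
# detection/tracking stage (not the authors' implementation).
import cv2
import numpy as np

def sparse_flow(prev_gray, next_gray, max_corners=200):
    # Detect corner features in the first frame.
    corners = cv2.goodFeaturesToTrack(prev_gray, maxCorners=max_corners,
                                      qualityLevel=0.01, minDistance=7)
    # Track them into the next frame with pyramidal Lucas-Kanade.
    next_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, next_gray,
                                                   corners, None)
    good = status.ravel() == 1
    p0 = corners[good].reshape(-1, 2)
    p1 = next_pts[good].reshape(-1, 2)
    # The corner displacements approximate the flow at those locations.
    return p0, p1 - p0   # corner positions and sparse flow vectors
```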

3.
A Survey of Optical Flow Computation for Color Time-Varying Images in Robot Vision   (Cited: 3; self-citations: 0; by others: 3)
陈震  高满屯  沈允文 《机器人》2001,23(6):559-562
This paper gives a comprehensive overview of methods for computing the optical flow field of color time-varying images in robot vision. The computation rests on the basic constraint equation; several current color-image optical flow methods are reviewed, several open problems in color optical flow computation are identified, and future development trends are summarized, with the aim of providing concrete guidance for the theory and methods of color time-varying image optical flow computation.

4.
In the transmission, acquisition and storage of color images, the chrominance channels are usually compressed to reduce the system resources an image occupies, and during reconstruction large-area coloring is used to rebuild the image. This, however, introduces considerable error in the detailed parts of the image. To obtain the best reconstructed color image in practice, one must study how to reconstruct the image effectively from incomplete data when the chrominance channels are subsampled. Aiming at the spatial quantization characteristics of CCD color images, this paper proposes a reconstruction method that exploits constraints of natural images and properties of human visual physiology. The method obtains image detail from the high-resolution channel and uses the similarity of detail across channels (their covariation) to recover the detail of the low-resolution channels. Experiments show that the method effectively improves image sharpness and removes the false color and blurring that are common in CCD image reconstruction; it can be applied to digital cameras and multispectral remote sensing image processing.
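The following is a rough, hypothetical sketch of the cross-channel detail-transfer idea (inject high-frequency detail from the full-resolution channel into the upsampled chroma channels); the gain `alpha` and the filters are illustrative choices, not the paper's method:

```python
# Chroma-from-luma detail transfer -- illustrative only, not the paper's algorithm.
import numpy as np
import cv2

def reconstruct_chroma(y_full, c_sub):
    """y_full: full-resolution luma channel; c_sub: subsampled chroma channel."""
    h, w = y_full.shape
    # Plain bilinear upsampling of the chroma channel (the "large-area" part).
    c_up = cv2.resize(c_sub, (w, h), interpolation=cv2.INTER_LINEAR)
    # High-frequency detail of the luma channel.
    y_low = cv2.GaussianBlur(y_full, (5, 5), 0)
    detail = y_full.astype(np.float32) - y_low.astype(np.float32)
    # Inject scaled luma detail into the chroma channel, assuming detail in
    # different channels is similar (the covariation assumption above).
    alpha = 0.5  # hypothetical gain; would be tuned or estimated locally
    return np.clip(c_up.astype(np.float32) + alpha * detail, 0, 255)
```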

5.
The fractal dimension of an image reflects its texture characteristics and is an important basis for image segmentation. After classifying methods for computing image fractal dimension, this paper focuses on a wavelet-transform-based method and compares it with the differential box-counting method derived from the box-dimension definition. The results show that the fractal dimension computed by the wavelet-based method is more accurate.
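For concreteness, here is a compact sketch of the differential box-counting baseline mentioned above; the box sizes and the assumption of 8-bit grey levels are illustrative choices:

```python
# Differential box-counting (DBC) estimate of image fractal dimension.
import numpy as np

def dbc_fractal_dimension(img, sizes=(2, 4, 8, 16, 32)):
    img = np.asarray(img, dtype=np.float64)
    M = min(img.shape)            # use the largest square sub-image
    img = img[:M, :M]
    G = 256.0                     # assumed number of grey levels (8-bit image)
    log_nr, log_inv_r = [], []
    for s in sizes:
        h = s * G / M             # box height in grey-level units
        n_r = 0
        for i in range(0, M - M % s, s):
            for j in range(0, M - M % s, s):
                block = img[i:i + s, j:j + s]
                l = np.ceil((block.max() + 1) / h)
                k = np.ceil((block.min() + 1) / h)
                n_r += int(l - k + 1)   # boxes needed to cover this block
        log_nr.append(np.log(n_r))
        log_inv_r.append(np.log(M / s))
    # Slope of log N_r versus log(1/r) estimates the fractal dimension.
    slope, _ = np.polyfit(log_inv_r, log_nr, 1)
    return slope
```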

6.
Compression Coding Techniques for Still Color Images   (Cited: 1; self-citations: 1; by others: 0)
In recent years, while research on grey-level image compression coding has continued, increasing attention has been paid to extending those techniques to color image compression. This paper reviews the progress of the main techniques used in still color image compression coding.

7.
Image Engineering in China: 2001   (Cited: 2; self-citations: 15; by others: 2)
This is the seventh in the annual series of literature surveys on image engineering in China. To give researchers and practitioners in image engineering a comprehensive picture of the state of research and development in China, and to make it easy to look up the relevant literature, more than 400 image engineering papers were selected from nearly 2300 research and application papers published in 2001 in more than 108 issues of 15 important Chinese journals related to image engineering. They were classified into 5 major categories and further into 21 subcategories, and then counted and analyzed. The statistics show that image engineering in China made much new progress in 2001: imaging techniques received further attention, new research hotspots formed around intellectual property protection, multi-sensor image fusion and high-level semantic information, and the range of applications of image techniques kept widening.

8.
The heart is a vital organ whose motion is closely related to its function, so motion analysis of the heart is essential in diagnosing cardiac disease. Modern medical imaging provides physicians with global information about the overall anatomical and functional structure of the heart. Exploiting the characteristics of cardiac ultrasound image sequences, namely that the displacement between adjacent frames is small while their grey-level and positional correlation is strong, this paper tracks the pixels of each frame according to the correlation of the 2-D image sequence to obtain the optical flow field of the sequence, and compares the result with traditional methods. To verify the feasibility of the algorithm, the correlation algorithm is applied to simulated images for quantitative analysis in combination with clinical knowledge, and the expected experimental results are obtained.
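A minimal sketch of correlation-based tracking between adjacent frames, in the spirit of the method described above (the paper's exact matching criterion and window sizes are not specified, so these are assumptions):

```python
# Correlation-based block matching between two adjacent frames.
import numpy as np

def match_block(prev, curr, y, x, block=8, search=4):
    """Return the displacement (dy, dx) of the block at (y, x) from prev to curr."""
    ref = prev[y:y + block, x:x + block].astype(np.float64)
    ref = (ref - ref.mean()) / (ref.std() + 1e-8)
    best, best_dxy = -np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            yy, xx = y + dy, x + dx
            if yy < 0 or xx < 0 or yy + block > curr.shape[0] or xx + block > curr.shape[1]:
                continue
            cand = curr[yy:yy + block, xx:xx + block].astype(np.float64)
            cand = (cand - cand.mean()) / (cand.std() + 1e-8)
            score = (ref * cand).mean()      # normalized cross-correlation
            if score > best:
                best, best_dxy = score, (dy, dx)
    return best_dxy
```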

9.
Joint Distribution Representation and Retrieval Techniques for Color Images   (Cited: 4; self-citations: 1; by others: 3)
With the explosion of image data, content-based image retrieval has become a research hotspot in the image database field. In image retrieval systems, the color histogram, being simple and convenient, has become the most commonly used technique in CBIR systems. The classical color histogram, however, has many drawbacks; for example, it cannot represent spatial distribution information within the image. Histogram refinement techniques have therefore been proposed, which extend the color distribution representation of an image into a joint distribution of color and other related features. To further improve retrieval, two weighted histogram models are presented on the basis of an analysis of image features: one integrates the distribution of image color and of detail-signal energy into a single histogram; the other integrates the joint distribution of image color and edge strength into one histogram. Both methods keep the simplicity and convenience of the classical histogram while effectively incorporating spatial information. Experimental results show that these weighted histogram representations have strong discriminative power for images.
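Below is a small sketch of the joint-distribution idea, here a joint histogram of quantized hue and edge strength; the bin counts and feature choices are illustrative assumptions, not the paper's exact model:

```python
# Joint color/edge-strength histogram as an image descriptor (illustrative).
import numpy as np
import cv2

def color_edge_histogram(bgr, color_bins=16, edge_bins=4):
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    hue = hsv[:, :, 0].astype(np.float32)            # hue in [0, 179] in OpenCV
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)
    edge = np.sqrt(gx * gx + gy * gy)
    edge = edge / (edge.max() + 1e-8)                # normalize edge strength to [0, 1]
    # 2-D joint histogram over (quantized hue, quantized edge strength).
    hist, _, _ = np.histogram2d(hue.ravel(), edge.ravel(),
                                bins=[color_bins, edge_bins],
                                range=[[0, 180], [0, 1]])
    return (hist / hist.sum()).ravel()               # normalized descriptor
```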

10.
A fast and efficient registration method for generating panoramas from video images or video image sequences is described. To estimate the correction parameters for image registration, the method computes pseudo motion vectors, which are rough estimates of the optical flow at each selected pixel. Using this method, a piece of software was implemented that can create and display panoramic images in real time on a low-cost PC.

11.
廖彬  杜明辉  胡金龙 《计算机科学》2011,38(5):272-274,300
Accurately locating the boundary of a deforming object is one of the difficulties of optical flow estimation, and improving the optical flow algorithm alone yields little benefit. A fully automatic region-growing segmentation is proposed to extract the deforming moving object accurately, and the video segmentation result is then combined with a gradient-based color optical flow algorithm to improve the detection accuracy of optical flow methods for deforming objects.

12.
Motion field and optical flow: qualitative properties   (Cited: 7; self-citations: 0; by others: 7)
It is shown that the motion field, the 2-D vector field which is the perspective projection on the image plane of the 3-D velocity field of a moving scene, and the optical flow, defined as the estimate of the motion field which can be derived from the first-order variation of the image brightness pattern, are in general different unless special conditions are satisfied. Therefore, dense optical flow is often ill-suited for computing structure from motion and for reconstructing the 3-D velocity field by algorithms which require a locally accurate estimate of the motion field. A different use of the optical flow is suggested. It is shown that the (smoothed) optical flow and the motion field can be interpreted as vector fields tangent to flows of planar dynamical systems. Stable qualitative properties of the motion field, which give useful information about the 3-D velocity field and the 3-D structure of the scene, can usually be obtained from the optical flow. The idea is supported by results from the theory of structural stability of dynamical systems.
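For context, the two objects compared in this abstract can be written compactly in standard notation (not taken from the paper):

```latex
% Motion field: perspective projection of the 3-D velocity of scene points.
\mathbf{x} = \frac{f}{Z}\begin{pmatrix} X \\ Y \end{pmatrix}, \qquad
\mathbf{v}(\mathbf{x}) = \frac{\mathrm{d}\mathbf{x}}{\mathrm{d}t};
\qquad
% Optical flow: the estimate (u, v) satisfying the brightness-change constraint.
E_x u + E_y v + E_t = 0 .
```

The paper's point is that the flow field satisfying the second equation need not coincide with the projected velocity field defined by the first.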

13.
In an infrared surveillance system (which must detect remote sources and thus has a very low resolution) in an aerospace environment, estimating the velocity of the cloudy sky should lower the false alarm rate by discriminating the motion of various moving shapes against a background velocity map. The optical flow constraint equation, based on a Taylor expansion of the intensity function, is often used to estimate the motion of each pixel. One of the main problems in motion estimation is that, for a single pixel, the true velocity cannot be found because of the aperture problem. Another kinematic estimation method is based on a matched filter, the generalized Hough transform (GHT): it gives a global velocity estimate for a set of pixels. On the one hand we obtain a local velocity estimate for each pixel with little credibility, because the optical flow is very sensitive to noise; on the other hand, we obtain a robust global kinematic estimate, the same for all selected pixels. This paper aims to adapt and improve the GHT for our typical application, in which one must discern the global movement of objects (clouds) whatever their form may be (clouds with hazy edges or distorted shapes, or even clouds with very little structure). We propose an improvement of the GHT algorithm by segmenting images with polar constraints on spatial gradients: a pixel at time t is matched with another at time t + T only if the direction and modulus of the gradient are similar. This technique, which is very efficient, sharpens the peak and improves the motion resolution. Each of these estimates is computed within windows of the image, the windows being selected by means of an entropy criterion. The kinematic vector is then computed accurately by applying the optical flow constraint equation to the displaced window. We show that, for small displacements, the optical flow constraint equation sharpens the results of the GHT. A semi-dense velocity field is thus obtained for cloud edges. A velocity map computed on real sequences with these methods is shown. In this way, a kinematic parameter discriminates between a target and the cloudy background.
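The gradient-constrained voting rule can be sketched as follows; the thresholds, subsampling step and search range are illustrative assumptions, and this is a simplified stand-in for the GHT described above:

```python
# GHT-style global displacement voting with polar (gradient) constraints:
# pixels vote for a displacement only when gradient direction and modulus match.
import numpy as np
import cv2

def global_shift_ght(img_t, img_t1, mag_tol=0.2, ang_tol=0.2, max_disp=10):
    def grad(img):
        gx = cv2.Sobel(img, cv2.CV_32F, 1, 0)
        gy = cv2.Sobel(img, cv2.CV_32F, 0, 1)
        return np.sqrt(gx * gx + gy * gy), np.arctan2(gy, gx)
    m0, a0 = grad(img_t)
    m1, a1 = grad(img_t1)
    strong = np.argwhere(m0 > np.percentile(m0, 95))   # high-gradient pixels only
    votes = np.zeros((2 * max_disp + 1, 2 * max_disp + 1))
    for y, x in strong[::4]:                            # subsample for speed
        for dy in range(-max_disp, max_disp + 1):
            for dx in range(-max_disp, max_disp + 1):
                yy, xx = y + dy, x + dx
                if not (0 <= yy < img_t1.shape[0] and 0 <= xx < img_t1.shape[1]):
                    continue
                # Vote only if gradient modulus and direction are similar.
                dang = np.angle(np.exp(1j * (a1[yy, xx] - a0[y, x])))
                if (abs(m1[yy, xx] - m0[y, x]) < mag_tol * (m0[y, x] + 1e-6)
                        and abs(dang) < ang_tol):
                    votes[dy + max_disp, dx + max_disp] += 1
    dy, dx = np.unravel_index(votes.argmax(), votes.shape)
    return dy - max_disp, dx - max_disp                 # global displacement estimate
```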

14.
The blur in target images caused by camera vibration due to robot motion or hand shaking, and by objects moving in the background scene, is difficult to deal with in a computer vision system. In this paper, the authors study the relational model between motion and blur in the case of object motion in a video image sequence, and work out a practical computational algorithm for both motion analysis and blurred image restoration. Combining general optical flow with a stochastic process, the paper presents an approach by which the motion velocity can be calculated from blurred images. Conversely, the blurred image can also be restored using the obtained motion information. To overcome the small-motion limitation of general optical flow computation, a multiresolution optical flow algorithm based on MAP estimation is proposed. To restore the blurred image, an iterative algorithm and the obtained motion velocity are used. Experiments show that the proposed approach works well for both motion velocity computation and blurred image restoration.
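Since the abstract does not give the iteration algorithm, here is a generic iterative restoration sketch (Richardson-Lucy deconvolution with a linear-motion PSF built from an estimated velocity) to illustrate how a recovered motion velocity can drive deblurring; it is a stand-in, not the paper's method:

```python
# Generic iterative deblurring driven by an estimated motion velocity.
import numpy as np
from scipy.signal import fftconvolve

def motion_psf(length, angle_rad):
    """Linear motion-blur kernel of given length (pixels) and direction."""
    size = int(length) + 2
    psf = np.zeros((size, size))
    c = (size - 1) / 2.0
    for t in np.linspace(-length / 2.0, length / 2.0, 4 * size):
        y, x = c + t * np.sin(angle_rad), c + t * np.cos(angle_rad)
        psf[int(round(y)), int(round(x))] += 1.0
    return psf / psf.sum()

def richardson_lucy(blurred, psf, n_iter=30):
    """Standard Richardson-Lucy iterations (not the paper's specific scheme)."""
    est = np.full_like(blurred, blurred.mean(), dtype=np.float64)
    psf_flip = psf[::-1, ::-1]
    for _ in range(n_iter):
        conv = fftconvolve(est, psf, mode="same")
        ratio = blurred / (conv + 1e-12)
        est *= fftconvolve(ratio, psf_flip, mode="same")
    return est
```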

15.
The estimation of dense velocity fields from image sequences is basically an ill-posed problem, primarily because the data only partially constrain the solution. It is rendered especially difficult by the presence of motion boundaries and occlusion regions, which are not taken into account by standard regularization approaches. In this paper, the authors present a multimodal approach to the problem of motion estimation in which the computation of visual motion is based on several complementary constraints. It is shown that multiple constraints can provide more accurate flow estimation in a wide range of circumstances. The theoretical framework relies on Bayesian estimation associated with global statistical models, namely Markov random fields. The constraints introduced here address the following issues: optical flow estimation while preserving motion boundaries, processing of occlusion regions, and fusion between gradient-based and feature-based motion constraint equations. Deterministic relaxation algorithms are used to merge information and to provide a solution to the maximum a posteriori estimation of the unknown dense motion field. The algorithm is well suited to a multiresolution implementation, which brings an appreciable speed-up as well as a significant improvement of the estimation when large displacements are present in the scene. Experiments on synthetic and real-world image sequences are reported.
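As a point of reference, MAP estimation with an MRF prior of the kind described above typically amounts to minimizing an energy of the following generic form (generic notation, not the paper's exact model):

```latex
% Generic MRF-based MAP formulation for dense flow estimation.
\hat{\mathbf{w}} = \arg\min_{\mathbf{w}} \;
\underbrace{\sum_{s} \rho_d\!\big(\nabla I(s)\cdot\mathbf{w}(s) + I_t(s)\big)}_{\text{data term}}
\; + \;
\lambda \underbrace{\sum_{\langle s,r\rangle} \rho_s\!\big(\mathbf{w}(s)-\mathbf{w}(r)\big)}_{\text{smoothness over neighboring sites}}
```

Here $\mathbf{w}=(u,v)$ is the flow, $\langle s,r\rangle$ ranges over neighboring sites, and robust penalties $\rho_d,\rho_s$ are what allow motion boundaries and occlusions to be handled rather than smoothed away.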

16.
Dynamic pattern analysis and motion extraction can be efficiently addressed using optical flow techniques. This article presents a generalization of these questions to non-flat surfaces, where optical flow is tackled through the problem of evolution processes on non-Euclidean domains. The classical equations of optical flow in the Euclidean case are transposed to the theoretical framework of differential geometry. We adopt this formulation for the regularized optical flow problem, prove its mathematical well-posedness, and combine it with the advection equation. The optical flow and advection problems are dual: a motion field may be retrieved from some scalar evolution using optical flow; conversely, a scalar field may be deduced from a velocity field using advection. These principles are illustrated with qualitative and quantitative evaluations from numerical simulations bridging both approaches. The proof of concept is further demonstrated with preliminary results from time-resolved functional brain imaging data, where organized propagation of cortical activation patterns is evidenced using our approach.
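The duality mentioned above can be stated with the standard transport (advection) equation, written here in generic notation rather than the article's:

```latex
% Transport of a scalar field I by a velocity field v on a surface M.
\partial_t I + \nabla_{\mathcal{M}} I \cdot \mathbf{v} = 0
```

Optical flow estimates $\mathbf{v}$ given observations of $I$; advection evolves $I$ given $\mathbf{v}$ and an initial condition, which is the duality the article exploits.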

17.
In motion estimation, illumination change is always a troublesome obstacle, often causing severe performance degradation in optical flow computation. The essential reason is that most estimation methods fail to formalize a unified definition, in the color or gradient domain, for diverse environmental changes. In this paper, we propose a new solution based on deep convolutional networks to address this key issue. Our idea is to train deep convolutional networks to represent the complex motion features under illumination change and then predict the final optical flow fields. To this end, we construct a training dataset of multi-exposure image pairs by performing a series of non-linear adjustments on traditional optical flow estimation datasets. Our multi-exposure flow network (MEFNet) model consists of three main components: a low-level feature network, a fusion feature network, and a motion estimation network. The former two components form the contracting part of the model, which extracts and represents the multi-exposure motion features; the third component is the expanding part, which learns and predicts the high-quality optical flow. Compared with many state-of-the-art methods, our motion estimation method can overcome the obstacle of illumination change and yields optical flow results with competitive accuracy and time efficiency. Moreover, the good performance of our model is also demonstrated in several multi-exposure video applications, such as HDR (high dynamic range) composition and flicker removal.
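A schematic, toy sketch of the contracting/expanding structure described above (feature extraction and fusion followed by flow prediction); the layer sizes and channel counts are made up, and this is not the MEFNet architecture:

```python
# Toy contracting/expanding flow-prediction network (illustrative only).
import torch
import torch.nn as nn

class ToyFlowNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Contracting part: extract and fuse motion features from the image pair.
        self.features = nn.Sequential(
            nn.Conv2d(6, 32, 3, stride=2, padding=1), nn.ReLU(),   # pair stacked on channels
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Expanding part: predict a 2-channel flow field back at input resolution.
        self.predict = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 2, 4, stride=2, padding=1),
        )

    def forward(self, img1, img2):
        x = torch.cat([img1, img2], dim=1)   # (N, 6, H, W), H and W divisible by 8
        return self.predict(self.features(x))
```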

18.
19.
This contribution focuses on the different topics covered by the special issue titled "Real-Time Motion Estimation for Image and Video Processing Applications", which involve GPUs, FPGAs, VLSI systems, DSPs, and multicores, among other platforms. The guest editors solicited original contributions addressing a wide range of theoretical and practical issues related to high-performance motion estimation for image processing, including, but not limited to: real-time matching motion estimation systems, real-time energy-based motion estimation systems, gradient-based motion estimation systems, optical flow estimation systems, color motion estimation systems, multi-scale motion estimation systems, optical flow and motion estimation systems, analysis or comparison of specialized architectures for motion estimation systems, and real-world applications.

20.
This paper presents a compressed-domain moving object extraction algorithm based on optical flow approximation for MPEG-2 video streams. The discrete cosine transform (DCT) coefficients of P and B frames are estimated from their motion vectors and the DCT coefficients of I frames, which can be extracted directly from the MPEG-2 compressed domain, to reconstruct a DC + 2AC image. An initial optical flow is estimated with Black's optical flow estimation framework, in which the DC image is replaced by the DC + 2AC image to provide more intensity information. A high-confidence measure is exploited to generate a dense and accurate motion vector field by removing noisy and false motion vectors. Global motion estimation and iterative rejection are further used to separate foreground and background motion vectors. Region growing with automatic seed selection is performed to extract an accurate object boundary under a motion consistency model. The object boundary is further refined by partially decoding the boundary blocks to improve accuracy. Experimental results on several test sequences demonstrate that the proposed approach achieves compressed-domain video object extraction for MPEG-2 streams in CIF format with real-time performance.
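A small sketch of the global-motion-with-iterative-rejection step described above: fit a global affine motion to the block motion vectors, discard vectors that disagree, and repeat. The model order, threshold and iteration count are illustrative assumptions:

```python
# Global affine motion estimation with iterative rejection of outlier vectors.
import numpy as np

def global_affine_with_rejection(points, vectors, n_iter=5, thresh=1.0):
    """points: (N, 2) block centers; vectors: (N, 2) block motion vectors."""
    keep = np.ones(len(points), dtype=bool)
    A_params = None
    for _ in range(n_iter):
        if keep.sum() < 3:                      # not enough inliers to fit
            break
        P, V = points[keep], vectors[keep]
        # Least-squares affine model: v = [x, y, 1] @ A_params, A_params is (3, 2).
        X = np.hstack([P, np.ones((len(P), 1))])
        A_params, *_ = np.linalg.lstsq(X, V, rcond=None)
        pred = np.hstack([points, np.ones((len(points), 1))]) @ A_params
        err = np.linalg.norm(pred - vectors, axis=1)
        keep = err < thresh                     # iterative rejection of outliers
    # Inliers approximate the background; rejected vectors are candidate foreground.
    return A_params, keep
```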
