20 similar documents retrieved.
1.
SEUNG-YONG LEE, KYUNG-YONG CHWA, JAMES HAHN, SUNG YONG SHIN 《Computer Animation and Virtual Worlds》1996,7(1):3-23
This paper presents a new image morphing method using a two-dimensional deformation technique which provides an intuitive model for a warp. The deformation technique derives a C1-continuous and one-to-one warp from a set of point pairs overlaid on two images. The resulting in-between image precisely reflects the correspondence of features specified by an animator. We also control the transition behaviour in a metamorphosis sequence by using another deformable surface model, which is simpler and thus more efficient than the deformation technique for a warp. The proposed method separates transition control from feature interpolation and is easier to use than previous techniques. The multigrid relaxation method is employed to solve a linear system in deriving a warp or transition rates. This makes our image morphing technique fast enough for an interactive environment.
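For reference (a schematic form, not the paper's exact formulation), warp-based morphing produces the in-between frame by warping each image partway along the specified feature correspondence and cross-dissolving the results; the abstract's separate transition control amounts to letting the transition parameter vary over the image:

$$
I_t(\mathbf{p}) \;=\; \bigl(1 - t(\mathbf{p})\bigr)\, I_0\!\bigl(W_{0,t}^{-1}(\mathbf{p})\bigr) \;+\; t(\mathbf{p})\, I_1\!\bigl(W_{1,t}^{-1}(\mathbf{p})\bigr), \qquad t(\mathbf{p}) \in [0,1],
$$

where $W_{0,t}$ and $W_{1,t}$ are the partial warps of the source and target images toward each other.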
2.
In this paper a new approach to motion analysis from stereo image sequences using a unified temporal and spatial optical flow field (UOFF) is reported. Based on a four-frame rectangular model and the associated six UOFF field quantities, a set of equations is derived from which both position and velocity can be determined. The approach does not require feature extraction or correspondence establishment, which are known to be difficult and for which only partial solutions suitable for simplistic situations have been developed. Furthermore, it is capable of detecting multiple moving objects even when partial occlusion occurs, and is potentially suitable for nonrigid motion analysis. Unlike existing techniques for motion analysis from stereo imagery, the motion recovered by this new approach is a whole continuous field rather than a set of features. It is a purely optical flow approach. Two experiments are presented to demonstrate the feasibility of the approach.
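For context (the abstract does not list the six UOFF field quantities), flow fields of this kind build on the classical brightness-constancy constraint for a single view,

$$
I_x u + I_y v + I_t = 0,
$$

where $(u, v)$ is the image velocity and $I_x, I_y, I_t$ are the spatial and temporal image derivatives; the UOFF formulation writes analogous constraints across the two stereo views and successive frames.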
3.
4.
A Stochastic Approach for Blurred Image Restoration and Optical Flow Computation on Field Image Sequence
The blur in target images caused by camera vibration due to robot motion or hand shaking, and by objects moving in the background scene, is difficult to deal with in a computer vision system. In this paper, the authors study the relation model between motion and blur in the case of object motion in a video image sequence, and develop a practical computation algorithm for both motion analysis and blurred image restoration. Combining general optical flow with a stochastic process, the paper presents an approach by which the motion velocity can be calculated from blurred images. Conversely, the blurred image can also be restored using the obtained motion information. To address the small-motion limitation of general optical flow computation, a multiresolution optical flow algorithm based on MAP estimation is proposed. The blurred image is restored using an iterative algorithm and the obtained motion velocity. Experiments show that the proposed approach works well for both motion velocity computation and blurred image restoration.
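As a reference model (assumed here, not quoted from the paper), uniform linear motion with velocity $\mathbf{v}$ during the exposure time $T$ blurs the image by averaging along the motion path,

$$
g(\mathbf{x}) \;=\; \frac{1}{T}\int_0^T f\bigl(\mathbf{x} - \mathbf{v}\,\tau\bigr)\, d\tau \;+\; n(\mathbf{x}),
$$

so an estimate of $\mathbf{v}$ from the flow field fixes the blur kernel, and restoration reduces to deconvolution with that kernel.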
5.
The currently popular TV-L1 optical flow algorithm can exploit a bidirectional solution mechanism to reduce computation without loss of accuracy, but it cannot effectively handle the erroneous flow components caused by discontinuities, occlusion, and similar factors. This work identifies the flow components in occluded regions from the motion consistency of the forward and backward optical flow, processes the occluded regions with a monotonically decreasing function to suppress the diffusion of erroneous flow from occluded regions into their neighbourhoods, and proposes lateral repair using neighbouring flow within the same frame together with longitudinal repair using flow from adjacent frames. Experiments show that the method handles occlusion well and improves the accuracy of the computed optical flow.
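A minimal sketch of the forward-backward consistency test described above; OpenCV's Farneback flow is used as a stand-in for the TV-L1 solver, and the threshold is illustrative:

```python
import cv2
import numpy as np

def occlusion_mask(img0, img1, thresh=1.0):
    """Flag pixels whose forward and backward flows disagree, a standard
    proxy for occlusion or otherwise unreliable flow."""
    g0 = cv2.cvtColor(img0, cv2.COLOR_BGR2GRAY)
    g1 = cv2.cvtColor(img1, cv2.COLOR_BGR2GRAY)
    fwd = cv2.calcOpticalFlowFarneback(g0, g1, None, 0.5, 3, 15, 3, 5, 1.2, 0)
    bwd = cv2.calcOpticalFlowFarneback(g1, g0, None, 0.5, 3, 15, 3, 5, 1.2, 0)

    h, w = g0.shape
    xs, ys = np.meshgrid(np.arange(w), np.arange(h))
    # Sample the backward flow at the forward-advected positions.
    map_x = (xs + fwd[..., 0]).astype(np.float32)
    map_y = (ys + fwd[..., 1]).astype(np.float32)
    bwd_at_fwd = cv2.remap(bwd, map_x, map_y, cv2.INTER_LINEAR)

    # Consistent motion: forward flow + warped backward flow is close to zero.
    err = np.linalg.norm(fwd + bwd_at_fwd, axis=2)
    return err > thresh  # True where the flow should be repaired
```

The masked pixels are then candidates for the lateral (same-frame neighbourhood) and longitudinal (adjacent-frame) repair the abstract proposes.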
6.
To control model deformation intuitively and precisely, a 3D deformation computation method based on a user-defined tetrahedral coordinate system is proposed. The tetrahedral coordinate system is first defined geometrically, and several of its properties with respect to geometric transformations are stated and proved, which makes topological deformation easy to implement and applicable in 3D deformation techniques. Two 3D deformation algorithms based on the tetrahedral coordinate system are then described: embedded deformation and feature-based precise deformation. Results on several deformation examples show that the method can effectively deform individual objects and morph between objects.
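The paper defines its own tetrahedral coordinate system; for orientation (the paper's definition may differ), the familiar barycentric form over a non-degenerate tetrahedron with vertices $\mathbf{v}_1,\dots,\mathbf{v}_4$ is

$$
\mathbf{p} = \sum_{i=1}^{4} \lambda_i \mathbf{v}_i, \qquad \sum_{i=1}^{4} \lambda_i = 1, \qquad \lambda_i = \frac{\operatorname{vol}\bigl(\mathbf{p}, \{\mathbf{v}_j\}_{j \neq i}\bigr)}{\operatorname{vol}(\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3, \mathbf{v}_4)},
$$

and such coordinates are preserved by affine transformations of the vertices, which is the property that lets a deformation of the control tetrahedra carry embedded geometry along with it.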
7.
8.
Ralph Hartley 《Pattern recognition letters》1985,3(4):253-262
A method of image segmentation known as pyramid linking was developed several years ago for segmenting images into gray level subpopulations. This method is applied here to the segmentation of optical flow fields. The algorithm makes use of a hierarchical data structure, called a pyramid, in which an image has a stack of representations, each of which has a coarser resolution than the one below. Each node constructs a model of the flow in the region of the image that it represents and then decides which of its fathers (on the level above) best represents this flow.
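A simplified, non-overlapping sketch of the linking iteration described above, applied directly to a dense flow field; the published method uses overlapping fathers and a full multi-level pyramid, so the details here are illustrative only:

```python
import numpy as np

def pyramid_link(flow, iterations=5):
    """One coarse level of (simplified) pyramid linking over a flow field of
    shape (H, W, 2) with H, W even: each fine node links to the nearby coarse
    "father" with the most similar flow; fathers are then re-estimated as the
    mean of their linked sons."""
    H, W, _ = flow.shape
    Hc, Wc = H // 2, W // 2
    coarse = flow.reshape(Hc, 2, Wc, 2, 2).mean(axis=(1, 3))  # initial fathers

    for _ in range(iterations):
        sums = np.zeros_like(coarse)
        counts = np.zeros((Hc, Wc, 1))
        links = np.zeros((H, W, 2), dtype=int)
        for i in range(H):
            for j in range(W):
                best, best_d = (i // 2, j // 2), np.inf
                for di in (-1, 0, 1):
                    for dj in (-1, 0, 1):
                        ci, cj = i // 2 + di, j // 2 + dj
                        if 0 <= ci < Hc and 0 <= cj < Wc:
                            d = np.sum((flow[i, j] - coarse[ci, cj]) ** 2)
                            if d < best_d:
                                best, best_d = (ci, cj), d
                links[i, j] = best
                sums[best] += flow[i, j]
                counts[best] += 1
        coarse = np.where(counts > 0, sums / np.maximum(counts, 1), coarse)
    return coarse, links  # region flow models and son-to-father links
```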
9.
Objective: To address the extraction and tracking of moving straight-line features in image sequences of complex scenes, a moving-line tracking method based on a point and line optical flow prediction mechanism is proposed. Method: Basic point and line optical flow constraint equations are first defined from the analytical expression of an image line, and three key corollaries on the correspondence between point optical flow and line optical flow are derived from them. Based on this correspondence, the optical flow of the pixels lying on line features is then used to estimate the line optical flow, and the moving lines in the image sequence are selected by thresholding the line flow. Finally, predicted line coordinates are computed from the selected moving lines and the estimated line flow, and tracking and matching are performed in the Hough domain to obtain the moving-line tracking result. Results: Experiments on synthetic and real image sequences show that the method accurately selects the moving lines of interest and tracks and matches them stably; the tracking results contain no mismatches to distractor lines, and the tracking time does not exceed 12 s. Conclusion: Compared with traditional line tracking and matching methods, the proposed method offers higher tracking accuracy and better robustness, and is better suited to moving-line tracking and matching in complex scenes.
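The three corollaries are not reproduced in the abstract; one way to see the point-line flow correspondence, assuming an image line in the normal form $x\cos\theta + y\sin\theta = \rho$, is to differentiate along a point that stays on the moving line:

$$
u\cos\theta + v\sin\theta \;=\; \dot{\rho} \;-\; \dot{\theta}\,\bigl(y\cos\theta - x\sin\theta\bigr),
$$

i.e. the normal component of each point flow $(u, v)$ is an affine function of position along the line, with intercept $\dot{\rho}$ and slope $-\dot{\theta}$, so the point flows measured on a line determine the line flow $(\dot{\rho}, \dot{\theta})$ by a least-squares fit.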
10.
The optical flow method is an important and effective deformable registration algorithm based on the optical flow field model. To address the problem that the features used by existing optical flow methods are of limited quality, which makes the registration results insufficiently accurate, deep convolutional neural network features are combined with the optical flow method and a deformable medical image registration algorithm based on deep convolutional feature optical flow (DCFOF) is proposed. First, the deep convolutional feature of the image patch around each pixel is densely extracted with a deep convolutional neural network; the optical flow field is then solved from the difference between the deep convolutional features of the fixed image and the floating image. By extracting more accurate and robust deep learning features of the images, the computed optical flow field is brought closer to the true deformation field and the registration accuracy is improved. Experimental results show that the proposed algorithm solves the deformable medical image registration problem more effectively, with registration accuracy superior to that of the Demons algorithm, the scale-invariant feature transform (SIFT) Flow algorithm, and the professional medical image registration software Elastix.
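A generic form of the feature-driven variational problem implied above (the paper's exact data term and regulariser may differ), with $F_f$ and $F_m$ the dense deep convolutional features of the fixed and floating images and $\mathbf{w}$ the sought flow field:

$$
E(\mathbf{w}) \;=\; \int_\Omega \bigl\lVert F_f(\mathbf{x}) - F_m\bigl(\mathbf{x} + \mathbf{w}(\mathbf{x})\bigr) \bigr\rVert^2 \, d\mathbf{x} \;+\; \alpha \int_\Omega \lVert \nabla \mathbf{w} \rVert^2 \, d\mathbf{x}.
$$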
11.
High-definition satellite video with a sampling resolution of 1 m can now provide real-time monitoring of relatively small moving targets on the ground. Since a moving vehicle in satellite video appears as only one or a few pixels, an approach for extracting traffic flow parameters from satellite video based on the optical flow method is proposed. Exploiting the fact that vehicle targets are pixel-level points, the method combines Shi-Tomasi corner detection to achieve vehicle detection and counting; building on the detection results, the corner positions in consecutive video frames obtained by the optical flow method are used to compute the average vehicle speed in both directions of travel, and the experimental results are compared and analysed. This work is a useful attempt at extracting traffic flow parameters from small moving vehicle targets in satellite video.
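A minimal sketch of the pipeline described above: Shi-Tomasi corners stand in for the pixel-sized vehicles and pyramidal Lucas-Kanade tracking supplies per-corner displacements, from which speed follows via the ground sampling distance and frame rate. The function names are real OpenCV calls; the parameter values and the simple magnitude threshold are illustrative assumptions:

```python
import cv2
import numpy as np

def mean_speed_kmh(prev_gray, cur_gray, gsd_m=1.0, fps=10.0):
    """Detect corner-like vehicle candidates, track them for one frame, and
    convert the mean displacement into a speed in km/h."""
    corners = cv2.goodFeaturesToTrack(prev_gray, maxCorners=500,
                                      qualityLevel=0.01, minDistance=3)
    if corners is None:
        return 0.0, 0
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, cur_gray, corners, None)
    ok = status.ravel() == 1
    disp_px = np.linalg.norm((nxt - corners).reshape(-1, 2)[ok], axis=1)
    moving = disp_px > 0.5                            # discard near-static corners
    if not np.any(moving):
        return 0.0, 0
    speed_ms = disp_px[moving].mean() * gsd_m * fps   # pixels/frame -> m/s
    return speed_ms * 3.6, int(moving.sum())          # km/h, vehicle count
```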
12.
V. N. Dvornychenko, M. S. Kong, S. M. Soria 《Journal of Mathematical Imaging and Vision》1992,2(1):27-38
Optical flow refers to the apparent motion of objects in the image plane, due to either camera or object motion. Applications of optical flow include robotics, image enhancement by means of frame integration, moving-target indication and passive navigation. The purpose of this paper is to give the simplest and clearest formulation of the dependence of optical flow on the mission and trajectory parameters. Once the canonical equations are established, their invertibility is addressed: to what extent can mission parameters be obtained from optical flow? Analysis shows that there are eight independent mission parameters (excluding focal length), so there must be exactly four relationships among the 12 flow coefficients. These are explicitly exhibited. It is then shown that the solution for the eight mission parameters in terms of the remaining eight independent coefficients hinges on a cubic resolvent. The roots of this resolvent are closely connected to the velocity-to-altitude ratio, and the solution can be constructed in terms of these. This solution generally turns out to be double valued, with both sets of mission parameters producing identical optical flows. Fortunately, one of these values can usually be eliminated as inappropriate within the context. The recognition of the dual solution and its explicit equations are believed to be a new contribution. To make the paper more self-contained, algorithms, window selection, correlation measures, and data editing are covered; much of this material has been previously published by others. The paper concludes with a discussion of the focus of expansion. It is shown that only camera-rotation-free parameters or their conjugates give rise to a focus of expansion. Explicit equations for these parameters are given.
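One standard parameterization consistent with the counts quoted above (eight parameters, twelve coefficients, four constraints) is the quadratic flow of a rigidly moving camera over a planar scene; the paper's own canonical form may differ:

$$
\begin{aligned}
u(x, y) &= a_1 + a_2 x + a_3 y + a_7 x^2 + a_8 xy,\\
v(x, y) &= a_4 + a_5 x + a_6 y + a_7 xy + a_8 y^2 .
\end{aligned}
$$

A general quadratic flow field has six coefficients per component, twelve in all; the form above has only eight free parameters, so exactly four relationships must hold among the twelve coefficients.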
13.
Combining the computational speed of the frame-difference method for moving target detection with the high detection accuracy of the optical flow method, an improved moving target detection algorithm based on inter-frame difference and optical flow field computation is proposed. In the frame-difference stage, alternate-frame differencing is used, so that moving point targets whose displacement between consecutive frames is less than one pixel but whose accumulated displacement over several frames exceeds one pixel can still be detected. In the optical flow computation, the general dynamic image model (GDIM) is introduced to establish a new optical flow constraint, which overcomes the failure of the constraint equation under brightness changes. The algorithm computes the optical flow field only for the non-zero pixels of the frame-differenced image, which improves both the accuracy and the speed of target detection. Simulation experiments demonstrate the effectiveness of the algorithm.
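A minimal sketch of the scheme described above: an alternate-frame difference (frame t versus frame t-2) gates where flow is evaluated, with Lucas-Kanade tracking standing in for the GDIM-constrained flow solver of the paper; thresholds and kernel sizes are illustrative:

```python
import cv2
import numpy as np

def detect_moving_points(f_prev2, f_prev, f_cur, diff_thresh=15):
    """Return matched point pairs inside the region flagged by the
    alternate-frame difference."""
    g0 = cv2.cvtColor(f_prev2, cv2.COLOR_BGR2GRAY)
    g1 = cv2.cvtColor(f_prev, cv2.COLOR_BGR2GRAY)
    g2 = cv2.cvtColor(f_cur, cv2.COLOR_BGR2GRAY)

    # Alternate-frame difference: catches sub-pixel per-frame motion that
    # accumulates to more than one pixel over two frames.
    mask = (cv2.absdiff(g2, g0) > diff_thresh).astype(np.uint8) * 255
    mask = cv2.dilate(mask, np.ones((5, 5), np.uint8))

    # Flow is only computed inside the changed region.
    pts = cv2.goodFeaturesToTrack(g1, 300, 0.01, 5, mask=mask)
    if pts is None:
        return np.empty((0, 2)), np.empty((0, 2))
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(g1, g2, pts, None)
    ok = status.ravel() == 1
    return pts.reshape(-1, 2)[ok], nxt.reshape(-1, 2)[ok]
```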
14.
To address the flow-induced vibration with large structural deformation caused by slug flow inside flexible pipes, a bidirectional fluid-structure interaction numerical model for two-phase-flow-conveying pipes undergoing large deformation is established using a partitioned strong-coupling approach. The gas-liquid interface is tracked with the volume-of-fluid method, the arbitrary Lagrangian-Eulerian (ALE) dynamic mesh method accounts for the deformation of the fluid-domain mesh, a finite element model of the flexible pipe dynamics is built, and a strongly coupled model is constructed from the interaction between the fluid and the pipe wall. The study shows that, under two-phase flow, the vibration of the flexible pipe is dominated by responses resembling the first and second vibration modes and that mode switching can occur; the switching is related to the liquid slug length, the slug flow frequency, and the axial distribution of the gas and liquid slugs in the pipe; and the large-deformation vibration of the pipe promotes the merging of short gas slugs and significantly changes the length and frequency of the liquid slugs, which in turn affects the pipe vibration and the flow-pattern transition boundary.
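For reference, the volume-of-fluid interface tracking mentioned above advects a liquid volume fraction $\alpha \in [0, 1]$ with the flow, interface cells being those with $0 < \alpha < 1$ and mixture properties blended from the phase values (standard VOF form, not quoted from the paper):

$$
\frac{\partial \alpha}{\partial t} + \nabla \cdot (\alpha\, \mathbf{u}) = 0, \qquad \rho = \alpha \rho_l + (1 - \alpha)\rho_g .
$$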
15.
《Behaviour & Information Technology》2012,31(1):32-45
Reformatting blocks of semi-structured information is a common editing task that typically involves highly repetitive action sequences, but ones where exceptional cases arise constantly and must be dealt with as they arise. This paper describes a procedural programming-by-example approach to repetitive text editing which allows users to construct programs within a standard editing interface and extend them incrementally. Following a brief practice period during which they settle on an editing strategy for the task at hand, users commence editing in the normal way. Once the first block of text has been edited, they inform the learning system, which constructs a generalized procedure from the actions that have been recorded. The system then attempts to apply the procedure to the next block of text, by predicting editing actions and displaying them for confirmation. If the user accepts a prediction, the action is carried out (and the program may be generalized accordingly); otherwise the user is asked to intervene and supply additional information, in effect debugging the program on the fly. A pilot implementation is described that operates in a simple interactive point-and-click editor (Macintosh MINI-EDIT), along with its performance on three sample tasks. In one case the procedure was learned correctly from the actions on the first text block, while in the others minor debugging was needed on subsequent text blocks. In each case a much smaller number of both keystrokes and mouse-clicks was required than with normal editing, without the system making any prior assumptions about the structure of the text except for some general knowledge about lexical patterns. Although a smooth interactive interface has not yet been constructed, the results obtained serve to indicate the potential of this approach for semi-structured editing tasks.
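A toy illustration of the predict-and-confirm cycle described above; the rule representation, names, and the single demonstration are invented for the sketch and are not the system's actual program representation:

```python
import re

Procedure = list[tuple[str, str]]          # (search pattern, replacement)

def apply_with_confirmation(procedure: Procedure, block: str) -> str:
    """Replay a learned procedure on the next text block, asking the user to
    accept or reject each predicted action."""
    for pattern, replacement in procedure:
        predicted = re.sub(pattern, replacement, block, count=1)
        if predicted == block:
            print(f"no match for {pattern!r}; user intervention needed")
            continue
        answer = input(f"apply {pattern!r} -> {replacement!r}? [y/n] ")
        if answer.strip().lower().startswith("y"):
            block = predicted              # accepted: commit the action
        # rejected: in the real system the user edits by hand and the
        # procedure is re-generalised ("debugged on the fly")
    return block

# A rule that might be learned from one demonstrated edit of "Smith, John".
rules: Procedure = [(r"(\w+), (\w+)", r"\2 \1")]
```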
16.
17.
Visual Simultaneous Localization and Mapping (visual SLAM) has attracted more and more researchers in recent decades and many state-of-the-art algorithms have been proposed with rather satisfactory performance in static scenarios. However, in dynamic scenarios, the performance of current visual SLAM algorithms degrades significantly due to the disturbance of the dynamic objects. To address this problem, we propose a novel method which uses optical flow to distinguish and eliminate the dynamic feature points from the extracted ones using RGB images as the only input. The static feature points are fed into the visual SLAM system for the camera pose estimation. We integrate our method with the original ORB-SLAM system and validate the proposed method with the challenging dynamic sequences from the TUM dataset and our recorded office dataset. The whole system can work in real time. Qualitative and quantitative evaluations demonstrate that our method significantly improves the performance of ORB-SLAM in dynamic scenarios.
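A hedged sketch of one common recipe for the dynamic-point filtering described above (not necessarily the authors' exact criterion): track ORB keypoints with pyramidal Lucas-Kanade flow, fit a global homography to the tracks with RANSAC, and keep only the points consistent with it, treating large residuals as dynamic objects to exclude from pose estimation:

```python
import cv2
import numpy as np

def static_points(prev_gray, cur_gray, thresh=3.0):
    """Return the keypoint pairs judged to move with the camera."""
    orb = cv2.ORB_create(1000)
    kps = orb.detect(prev_gray, None)
    if not kps:
        return np.empty((0, 2)), np.empty((0, 2))
    p0 = np.float32([kp.pt for kp in kps]).reshape(-1, 1, 2)
    p1, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, cur_gray, p0, None)
    ok = status.ravel() == 1
    p0, p1 = p0[ok], p1[ok]
    if len(p0) < 4:
        return p0.reshape(-1, 2), p1.reshape(-1, 2)
    _, inliers = cv2.findHomography(p0, p1, cv2.RANSAC, thresh)
    if inliers is None:
        return p0.reshape(-1, 2), p1.reshape(-1, 2)
    keep = inliers.ravel() == 1
    return p0[keep].reshape(-1, 2), p1[keep].reshape(-1, 2)
```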
18.
Sagar Chhetri, Abeer Alsadoon, Thair Al‐Dala'in, P. W. C. Prasad, Tarik A. Rashid, Angelika Maag 《Computational Intelligence》2021,37(1):578-595
Accurate fall detection for the assistance of older people is crucial to reduce deaths and injuries due to falls. Vision-based fall detection systems have shown some significant results in detecting falls, yet numerous challenges remain. Deep learning has changed the landscape of vision-based systems such as action recognition, but it has not been successfully applied to vision-based fall detection because of the large amounts of computational power and sample training data it requires. This research aims to propose a vision-based fall detection system that improves the accuracy of fall detection in some complex environments, such as a change of lighting conditions in the room, and to increase the performance of the pre-processing of video images. The proposed system uses an Enhanced Dynamic Optical Flow technique that encodes the temporal data of optical flow videos by rank pooling, which improves the processing time of fall detection and the classification accuracy under dynamic lighting conditions. Experimental results showed that the classification accuracy of fall detection improved by around 3% and the processing time by 40–50 ms. The proposed system concentrates on decreasing the processing time of fall detection and improving classification accuracy; meanwhile, it provides a mechanism for summarizing a video into a single image using the dynamic optical flow technique, which helps to increase the performance of the image pre-processing steps.
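A heavily simplified stand-in for the rank pooling step described above: instead of the RankSVM formulation used in the literature, a ridge-regularised least-squares fit of the temporal order against per-frame features yields a single weight image that summarises the flow sequence; the names and regulariser value are illustrative:

```python
import numpy as np

def rank_pool(frames, lam=1e3):
    """frames: (T, H, W) array, e.g. per-frame optical-flow magnitude.
    Returns one (H, W) "dynamic image" whose inner product with the frames
    increases (approximately) with time."""
    T = frames.shape[0]
    X = frames.reshape(T, -1).astype(np.float64)   # one feature row per frame
    t = np.arange(1, T + 1, dtype=np.float64)      # target: temporal order
    # Dual-form ridge regression keeps the linear system at size T x T.
    K = X @ X.T + lam * np.eye(T)
    alpha = np.linalg.solve(K, t)
    w = X.T @ alpha                                # back to pixel space
    return w.reshape(frames.shape[1:])
```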
19.
Visual Speech Synthesis by Morphing Visemes
We present MikeTalk, a text-to-audiovisual speech synthesizer which converts input text into an audiovisual speech stream. MikeTalk is built using visemes, which are a small set of images spanning a large range of mouth shapes. The visemes are acquired from a recorded visual corpus of a human subject which is specifically designed to elicit one instantiation of each viseme. Using optical flow methods, correspondence from every viseme to every other viseme is computed automatically. By morphing along this correspondence, a smooth transition between viseme images may be generated. A complete visual utterance is constructed by concatenating viseme transitions. Finally, phoneme and timing information extracted from a text-to-speech synthesizer is exploited to determine which viseme transitions to use, and the rate at which the morphing process should occur. In this manner, we are able to synchronize the visual speech stream with the audio speech stream, and hence give the impression of a photorealistic talking face.
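A minimal sketch of flow-based morphing between two viseme images; Farneback flow stands in for the correspondence computed in the paper, and the scaled-flow approximation of the intermediate-frame warps is a common simplification rather than the paper's exact procedure:

```python
import cv2
import numpy as np

def morph(A, B, t):
    """Generate the in-between frame at t in [0, 1] for same-sized BGR images."""
    ga = cv2.cvtColor(A, cv2.COLOR_BGR2GRAY)
    gb = cv2.cvtColor(B, cv2.COLOR_BGR2GRAY)
    fwd = cv2.calcOpticalFlowFarneback(ga, gb, None, 0.5, 3, 15, 3, 5, 1.2, 0)
    bwd = cv2.calcOpticalFlowFarneback(gb, ga, None, 0.5, 3, 15, 3, 5, 1.2, 0)

    h, w = ga.shape
    xs, ys = np.meshgrid(np.arange(w, dtype=np.float32),
                         np.arange(h, dtype=np.float32))
    # Approximate the flow from the intermediate frame to each endpoint by
    # scaling the endpoint-to-endpoint flows, then backward-warp each image.
    warped_A = cv2.remap(A, xs + t * bwd[..., 0], ys + t * bwd[..., 1],
                         cv2.INTER_LINEAR)
    warped_B = cv2.remap(B, xs + (1 - t) * fwd[..., 0], ys + (1 - t) * fwd[..., 1],
                         cv2.INTER_LINEAR)
    # Cross-dissolve the two warped images.
    return cv2.addWeighted(warped_A, 1 - t, warped_B, t, 0)
```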
20.
József Molnár, Dmitry Chetverikov, Sándor Fazekas 《Computer Vision and Image Understanding》2010,114(10):1104-1114
We address the problem of variational optical flow for video processing applications that need fast operation and robustness to drastic variations in illumination. Recently, a solution [1] has been proposed based on the photometric invariants of the dichromatic reflection model [2]. However, this solution is only applicable to colour videos with brightness variations. Greyscale videos, or colour videos with colour illumination changes, cannot be adequately handled. We propose two illumination-robust variational methods based on cross-correlation that are applicable to colour and grey-level sequences and robust to brightness and colour illumination changes. First, we present a general implicit nonlinear scheme that assumes no particular analytical form of energy functional and can accommodate different image components and data metrics, including cross-correlation. We test the nonlinear scheme on standard synthetic data with artificial brightness and colour effects added and conclude that cross-correlation is robust to both kinds of illumination changes. Then we derive a fast linearised numerical scheme for cross-correlation based variational optical flow. We test the linearised algorithm on challenging data and compare it to a number of state-of-the-art variational flow methods.
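A cross-correlation data term of the kind discussed above can be written, for a window $W(\mathbf{x})$ with windowed means $\bar{I}_0, \bar{I}_1$ and standard deviations $\sigma_0, \sigma_1$ (a generic form; the paper's functional may differ):

$$
E(\mathbf{w}) = \int_\Omega \Bigl(1 - \mathrm{NCC}\bigl(\mathbf{x}, \mathbf{w}(\mathbf{x})\bigr)\Bigr)\, d\mathbf{x} + \alpha \int_\Omega \lVert \nabla \mathbf{w} \rVert^2 d\mathbf{x},
\qquad
\mathrm{NCC}(\mathbf{x}, \mathbf{w}) = \frac{\sum_{\mathbf{y} \in W(\mathbf{x})} \bigl(I_0(\mathbf{y}) - \bar{I}_0\bigr)\bigl(I_1(\mathbf{y} + \mathbf{w}) - \bar{I}_1\bigr)}{\lvert W \rvert\, \sigma_0\, \sigma_1},
$$

which is invariant to local affine changes of brightness, the property that makes it robust to illumination variation.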