Similar Literature
 20 similar records found.
1.
We propose a motion estimation system that uses stereo image pairs as the input data. To support the experimental work, we also obtained a sequence of outdoor stereo images taken by two metric cameras. The system consists of four main stages: (1) determination of point correspondences on the stereo images, (2) correction of distortions in the image coordinates, (3) derivation of 3D point coordinates from the 2D correspondences, and (4) estimation of motion parameters based on the 3D point correspondences. For the first stage, we use a four-way matching algorithm to obtain matched points on the two stereo image pairs at two consecutive time instants (ti and ti+1). Since the input data are stereo images taken by metric cameras, they contain two types of distortion: (i) film distortion and (ii) lens distortion. Both must be corrected before any further processing can be applied to the matched points. To accomplish this, we use (i) a bilinear transform for film distortion correction and (ii) lens formulas for lens distortion correction. After correcting the distortions, the results are 2D coordinates for each matched point, from which 3D coordinates can be derived. However, due to data noise, the calculated 3D coordinates do not usually represent a consistent rigid structure suitable for motion estimation; we therefore suggest a procedure to select good 3D point sets as the input for motion estimation. The procedure exploits two constraints: rigidity between different time instants and uniform point distribution across the object in the image. For the last stage, we use an algorithm to estimate the motion parameters. We also wish to know the effect of quantization error on the estimated results; therefore, an error analysis based on quantization error is performed on the estimated motion parameters. To test our system, eight sets of stereo image pairs were extracted from an outdoor stereo image sequence and used as the input data. The experimental results indicate that the proposed system provides reasonable estimated motion parameters.
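
The last stage described above, estimating rotation and translation from matched 3D points, is commonly solved in closed form with an SVD-based absolute-orientation method. The following is a minimal sketch of that generic technique (not necessarily the authors' exact algorithm), assuming noise-free numpy arrays of corresponding points:

```python
import numpy as np

def rigid_motion_svd(P, Q):
    """Estimate R, t such that Q ~= R @ P + t from Nx3 corresponding 3D points.
    Classic SVD-based absolute-orientation (Kabsch/Horn) solution."""
    P, Q = np.asarray(P, float), np.asarray(Q, float)
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)        # centroids
    H = (P - cP).T @ (Q - cQ)                      # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # guard against reflection
    R = Vt.T @ D @ U.T
    t = cQ - R @ cP
    return R, t

# Toy usage (values are illustrative, not from the paper): recover a known motion.
theta = np.deg2rad(10.0)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.5, -0.2, 1.0])
P = np.random.rand(8, 3)
Q = P @ R_true.T + t_true
R_est, t_est = rigid_motion_svd(P, Q)
print(np.allclose(R_est, R_true), np.allclose(t_est, t_true))   # expected: True True
```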

2.
Microlens arrays for integral imaging system   Total citations: 1 (self: 0, others: 1)
Arai J  Kawai H  Okano F 《Applied optics》2006,45(36):9066-9078
When designing a system capable of capturing and displaying 3D moving images in real time by the integral imaging (II) method, one challenge is to eliminate pseudoscopic images. To overcome this problem, we propose a simple system with an array of three convex lenses. First, the lateral magnification of the elemental optics and the expansion of an elemental image are described by geometrical optics, confirming that the elemental optics satisfies the conditions under which pseudoscopic images can be avoided. When using the II method, adjacent elemental images must not overlap, a condition also satisfied by the proposed optical system. Next, an experiment carried out to acquire and display 3D images is described. The real-time system we have constructed comprises an elemental optics array with 54 (H) x 59 (V) elements, a CCD camera to capture the group of elemental images created by the lens array, and a liquid crystal panel to display these images. The results of the experiment confirm that the system produces orthoscopic images in real time and is thus effective for real-time application of the II method.

3.
Kim SC  Hwang DC  Lee DH  Kim ES 《Applied optics》2006,45(22):5669-5676
A novel method of using stereoscopic video images to synthesize computer-generated hologram (CGH) patterns of a real 3D object is proposed. Stereoscopic video images of a real 3D object are captured by a 3D camera system. Disparity maps between the captured stereo image pairs are estimated, and from these maps the depth of each pixel of the object can be extracted on a frame basis. Using these depth data and the original color images, hologram patterns of the real object can be computationally generated. In experiments, stereoscopic video images of a real 3D object, a wooden rhinoceros doll, are captured with the Wasol 3D adapter system, and depth data are extracted from them. CGH patterns of 1280 pixels x 1024 pixels are then generated from these depth-annotated images, and the patterns are displayed on a holographic display system.
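
For context, a compact sketch of the general point-source (Fresnel) CGH idea that depth-annotated images enable: each pixel with known depth contributes a spherical-wave phase to the hologram plane. Wavelength, pixel pitch, and the point coordinates below are placeholder assumptions, and this is an illustrative sketch rather than the authors' formulation:

```python
import numpy as np

def point_source_cgh(points, amplitudes, holo_shape, pitch, wavelength):
    """Fresnel (paraxial) point-source hologram: sum spherical-wave phases
    from (x, y, z) object points onto a sampled hologram plane."""
    H, W = holo_shape
    ys = (np.arange(H) - H / 2) * pitch
    xs = (np.arange(W) - W / 2) * pitch
    X, Y = np.meshgrid(xs, ys)
    k = 2 * np.pi / wavelength
    field = np.zeros(holo_shape, dtype=complex)
    for (x, y, z), a in zip(points, amplitudes):
        r2 = (X - x) ** 2 + (Y - y) ** 2
        field += a * np.exp(1j * k * (z + r2 / (2 * z)))   # Fresnel approximation
    return field

# Toy usage: three object points at different depths (placeholder values, in metres).
pts = [(0.0, 0.0, 0.2), (1e-3, 0.0, 0.25), (0.0, -1e-3, 0.3)]
amps = [1.0, 0.8, 0.6]
field = point_source_cgh(pts, amps, holo_shape=(256, 256),
                         pitch=8e-6, wavelength=532e-9)
hologram = np.angle(field)          # e.g. a phase-only CGH pattern
print(hologram.shape)
```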

4.
Volumetric ultrasound imaging using 2-D CMUT arrays   Total citations: 5 (self: 0, others: 5)
Recently, capacitive micromachined ultrasonic transducers (CMUTs) have emerged as a candidate to overcome the difficulties in the realization of 2-D arrays for real-time 3-D imaging. In this paper, we present the first volumetric images obtained using a 2-D CMUT array. We have fabricated a 128 x 128-element 2-D CMUT array with through-wafer via interconnects and a 420-µm element pitch. As an experimental prototype, a 32 x 64-element portion of the 128 x 128-element array was diced and flip-chip bonded onto a glass fanout chip. This chip provides individual leads from a central 16 x 16-element portion of the array to surrounding bondpads. An 8 x 16-element portion of the array was used in the experiments along with a 128-channel data acquisition system. For imaging phantoms, we used a 2.37-mm diameter steel sphere located 10 mm from the array center and two 12-mm-thick Plexiglas plates located 20 mm and 60 mm from the array. A 4 x 4 group of elements in the middle of the 8 x 16-element array was used in transmit, and the remaining elements were used to receive the echo signals. The echo signal obtained from the spherical target presented a frequency spectrum centered at 4.37 MHz with a 100% fractional bandwidth, whereas the frequency spectrum for the echo signal from the parallel plate phantom was centered at 3.44 MHz with a 91% fractional bandwidth. The images were reconstructed by using RF beamforming and synthetic phased array approaches and visualized by surface rendering and multiplanar slicing techniques. The image of the spherical target has been used to approximate the point spread function of the system and is compared with theoretical expectations. This study experimentally demonstrates that 2-D CMUT arrays can be fabricated with high yield using silicon IC-fabrication processes, individual electrical connections can be provided using through-wafer vias, and flip-chip bonding can be used to integrate these dense 2-D arrays with electronic circuits for practical 3-D imaging applications.
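
A stripped-down sketch of the delay-and-sum reconstruction idea mentioned above (focusing received RF traces onto 3-D voxels for a 2-D array). The element layout, sampling rate, sound speed, and the single-point transmit model are assumptions for illustration, not the paper's actual acquisition scheme:

```python
import numpy as np

def das_beamform(rf, elem_xy, voxels, c=1540.0, fs=50e6, tx_xy=(0.0, 0.0)):
    """Delay-and-sum a set of receive channels onto 3-D voxels.
    rf      : (n_elem, n_samples) received RF traces for one transmit event
    elem_xy : (n_elem, 2) element positions in the array plane z = 0 [m]
    voxels  : (n_vox, 3) focal points [m]
    tx_xy   : transmit (sub)aperture centre, treated as a point source here."""
    n_elem, n_samp = rf.shape
    elem = np.c_[np.asarray(elem_xy, float), np.zeros(n_elem)]   # elements at z = 0
    tx = np.array([tx_xy[0], tx_xy[1], 0.0])
    img = np.zeros(len(voxels))
    for vi, p in enumerate(np.asarray(voxels, float)):
        d_tx = np.linalg.norm(p - tx)                        # transmit path
        d_rx = np.linalg.norm(p - elem, axis=1)              # receive paths
        idx = np.round((d_tx + d_rx) / c * fs).astype(int)   # round-trip delay in samples
        valid = idx < n_samp
        img[vi] = rf[np.arange(n_elem)[valid], idx[valid]].sum()
    return img

# Toy usage with random data (shapes only; values are meaningless).
rf = np.random.randn(128, 2000)
elem_xy = np.stack(np.meshgrid(np.arange(8), np.arange(16)), -1).reshape(-1, 2) * 420e-6
vox = np.array([[0.0, 0.0, z] for z in np.linspace(5e-3, 30e-3, 64)])
print(das_beamform(rf, elem_xy, vox).shape)   # (64,)
```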

5.
We investigate the relationships that exist between the three-dimensional structure and kinematics of a line moving rigidly in space and the two-dimensional structure and kinematics (motion field) of its image in one or two cameras. We establish the fundamental equations that relate its three-dimensional motion to its observed image motion. We show how this motion field can be estimated from a line-based token tracker. We then assume that stereo matches have been established between image segments and show how the estimation of the motion field in the two images can be used to compute part of the kinematic screw of the corresponding 3D line. The equations are linear and, if several lines belong to the same object, provide a very simple way to estimate the full kinematic screw of that object. Finally, we show how the motion field can constrain the stereo matches by establishing necessary conditions that must be satisfied by the motion field of segments that are images of lines belonging to the same object. Only part of this theory has been implemented so far; this part uses Kalman filtering. Several experimental results using synthetic and real data are presented.
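
Since the implemented part relies on Kalman filtering, here is a minimal generic linear Kalman filter step as a reminder of the machinery involved. The state model, noise covariances, and the tracked quantity below are illustrative assumptions, not the paper's line-token filter:

```python
import numpy as np

def kalman_step(x, P, z, F, H, Q, R):
    """One predict/update cycle of a linear Kalman filter."""
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

# Toy usage: track position/velocity of a token moving at constant velocity 0.7.
dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])          # constant-velocity model
H = np.array([[1.0, 0.0]])                     # only position is measured
Q = 1e-4 * np.eye(2)
R = np.array([[0.05]])
x, P = np.zeros(2), np.eye(2)
rng = np.random.default_rng(5)
for t in range(30):
    z = np.array([0.7 * t + rng.normal(0, 0.2)])   # noisy position measurements
    x, P = kalman_step(x, P, z, F, H, Q, R)
print(x.round(2))    # estimated [position, velocity]; velocity should approach 0.7
```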

6.
Erdmann L  Gabriel KJ 《Applied optics》2001,40(31):5592-5599
We suggest what we believe is a new three-dimensional (3-D) camera system for integral photography. Our method enables high-resolution 3-D imaging. In contrast to conventional integral photography, a moving microlens array (MLA) and a low-resolution camera are used. The intensity distribution in the MLA image plane is sampled sequentially by use of a pinhole array. The inversion problem from pseudoscopic to orthoscopic images is dealt with by electronic means. The new method is suitable for real-time 3-D imaging. We verified the new method experimentally. Integral photographs with a resolution of 3760 pixels x 2560 pixels (188 x 128 element images) are presented.
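
One common electronic fix for the pseudoscopic-to-orthoscopic inversion mentioned above is to rotate every elemental image by 180 degrees about its own centre. A minimal numpy sketch of that generic operation; the element size and layout here match the reported resolution but the procedure is not necessarily the authors' exact one:

```python
import numpy as np

def invert_elemental_images(integral_img, elem_h, elem_w):
    """Rotate each elemental image by 180 degrees to convert a pseudoscopic
    integral photograph into an orthoscopic one (generic technique)."""
    H, W = integral_img.shape[:2]
    out = integral_img.copy()
    for r in range(0, H - elem_h + 1, elem_h):
        for c in range(0, W - elem_w + 1, elem_w):
            out[r:r + elem_h, c:c + elem_w] = \
                integral_img[r:r + elem_h, c:c + elem_w][::-1, ::-1]
    return out

# Toy usage mirroring the reported format: 188 x 128 elemental images of 20 x 20 pixels.
img = np.random.rand(2560, 3760)     # 128 * 20 rows, 188 * 20 columns
ortho = invert_elemental_images(img, 20, 20)
print(ortho.shape)
```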

7.
Matoba O  Tajahuerce E  Javidi B 《Applied optics》2001,40(20):3318-3325
A novel system for recognizing three-dimensional (3D) objects by use of multiple perspectives imaging is proposed. A 3D object under incoherent illumination is projected into an array of two-dimensional (2D) elemental images by use of a microlens array. Each elemental 2D image corresponds to a different perspective of the 3D object. Multiple perspectives imaging based on integral photography has been used for 3D display. In this way, the whole set of 2D elemental images records 3D information about the input object. After an optical incoherent-to-coherent conversion, an optical processor is employed to perform the correlation between the input and the reference 3D objects. Use of micro-optics allows us to process the 3D information in real time and with a compact optical system. To the best of our knowledge this 3D processor is the first to apply the principle of integral photography to 3D image recognition. We present experimental results obtained with both a digital and an optical implementation of the system. We also show that the system can recognize a slightly out-of-plane rotated 3D object.
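
The recognition step is a correlation between the elemental-image set of the input and that of the reference object. A hedged sketch of one possible digital realization using plain FFT-based cross-correlation (a matched filter); the array sizes and the normalization are assumptions, not the paper's processor:

```python
import numpy as np

def normalized_peak_correlation(scene, reference):
    """FFT-based circular cross-correlation of two images; returns the
    correlation peak value (higher = better match) and its location."""
    s = scene - scene.mean()
    r = reference - reference.mean()
    S = np.fft.fft2(s)
    R = np.fft.fft2(r, s=s.shape)
    corr = np.real(np.fft.ifft2(S * np.conj(R)))
    corr /= (np.linalg.norm(s) * np.linalg.norm(r) + 1e-12)
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    return corr[peak], peak

# Toy usage: compare a matching and a non-matching elemental-image set.
rng = np.random.default_rng(0)
reference = rng.random((256, 256))          # stand-in for the stored reference
scene_match = reference + 0.05 * rng.random((256, 256))
scene_other = rng.random((256, 256))
print(normalized_peak_correlation(scene_match, reference)[0] >
      normalized_peak_correlation(scene_other, reference)[0])   # expected: True
```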

8.
In this article, we develop a new method for matching any two images with arbitrary orientations. The idea comes from workpiece localization in the machining industry. We first describe an image as a 3D point set rather than the common 2D function f(x, y); then, making the sets corresponding to the compared images form solid surfaces, we translate the matching problem equivalently into an optimization problem on the Lie group SE(3). By developing a family of steepest-descent algorithms on a general Lie group, we present a practical algorithm for the matching problem. Simulations of eye detection and face detection are presented to show the feasibility and efficiency of the proposed algorithm. © 2010 Wiley Periodicals, Inc. Int J Imaging Syst Technol, 20, 245–252, 2010.
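
A minimal sketch of what steepest descent on SE(3) via the exponential map looks like, applied here to a point-set alignment cost as a stand-in for the article's surface-matching cost. The finite-difference gradient, step size, and cost are illustrative assumptions, not the authors' algorithm:

```python
import numpy as np
from scipy.linalg import expm

def hat(xi):
    """se(3) hat operator: 6-vector (w, v) -> 4x4 twist matrix."""
    wx, wy, wz, vx, vy, vz = xi
    return np.array([[0.0, -wz,  wy, vx],
                     [wz,  0.0, -wx, vy],
                     [-wy, wx,  0.0, vz],
                     [0.0, 0.0, 0.0, 0.0]])

def transform(T, P):
    """Apply a 4x4 rigid transform to Nx3 points."""
    return P @ T[:3, :3].T + T[:3, 3]

def align_se3(P, Q, steps=200, lr=0.05, eps=1e-6):
    """Steepest descent on SE(3): minimise sum ||T(P) - Q||^2, with a
    finite-difference gradient in the Lie algebra and updates T <- exp(hat(dxi)) T."""
    T = np.eye(4)
    for _ in range(steps):
        base = np.sum((transform(T, P) - Q) ** 2)
        grad = np.zeros(6)
        for i in range(6):                       # directional derivatives in se(3)
            d = np.zeros(6); d[i] = eps
            cost = np.sum((transform(expm(hat(d)) @ T, P) - Q) ** 2)
            grad[i] = (cost - base) / eps
        T = expm(hat(-lr * grad)) @ T            # retract along the negative gradient
    return T

# Toy usage: recover a small known rigid motion.
rng = np.random.default_rng(1)
P = rng.random((8, 3))
T_true = expm(hat([0.1, -0.05, 0.2, 0.3, -0.1, 0.05]))
Q = transform(T_true, P)
T_est = align_se3(P, Q)
print(round(float(np.sum((transform(T_est, P) - Q) ** 2)), 6))   # residual, should be ~0.0
```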

9.
Previous methods for estimating observer motion in a rigid 3D scene assume that image velocities can be measured at isolated points. When the observer is moving through a cluttered 3D scene such as a forest, however, pointwise measurements of image velocity are more challenging to obtain because multiple depths, and hence multiple velocities, are present in most local image regions. We introduce a method for estimating egomotion that avoids pointwise image velocity estimation as a first step. In its place, the direction of motion parallax in local image regions is estimated, using a spectrum-based method, and these directions are then combined to directly estimate 3D observer motion. There are two advantages to this approach. First, the method can be applied to a wide range of 3D cluttered scenes, including those for which pointwise image velocities cannot be measured because only normal velocity information is available. Second, the egomotion estimates can be used as a posterior constraint on estimating pointwise image velocities, since known egomotion parameters constrain the candidate image velocities at each point to a one-dimensional rather than a two-dimensional space.
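
A sketch of the core geometric idea behind combining parallax directions: for (mostly) translational egomotion, the motion-parallax direction at image point p_i is aligned with (p_i - e), where e is the focus of expansion, so e can be recovered by linear least squares from direction estimates alone. The spectrum-based direction estimator itself is not reproduced; directions are assumed given, and the numbers are placeholders:

```python
import numpy as np

def estimate_foe(points, directions):
    """Least-squares focus of expansion from parallax directions.
    Each unit direction d_i at image point p_i gives the constraint
    n_i . (p_i - e) = 0, where n_i is perpendicular to d_i."""
    p = np.asarray(points, float)
    d = np.asarray(directions, float)
    n = np.stack([-d[:, 1], d[:, 0]], axis=1)        # rotate directions by 90 degrees
    A = n                                            # (N, 2)
    b = np.sum(n * p, axis=1)                        # n_i . p_i
    e, *_ = np.linalg.lstsq(A, b, rcond=None)
    return e

# Toy usage: synthesize parallax directions radiating from a known FOE.
rng = np.random.default_rng(2)
foe_true = np.array([40.0, -25.0])
pts = rng.uniform(-200, 200, size=(50, 2))
dirs = pts - foe_true
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
print(np.allclose(estimate_foe(pts, dirs), foe_true))   # expected: True
```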

10.
This paper studies the restoration of a single image blurred by radial motion with space-variant line spread functions (SVLSFs). First, models of SVLSFs are derived, including the focus of expansion or contraction with constant velocity, acceleration, and deceleration. Then, the rectangular-to-polar lattice transformation is discussed, which simplifies the restoration process. Finally, we demonstrate the restoration of such motion-blurred images, both generated by computer and taken from real scenery, to validate the proposed methods.
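
A sketch of the rectangular-to-polar lattice transformation mentioned above, which turns a radial blur about the focus of expansion into a blur along one axis of the resampled image. It is implemented here with scipy's generic coordinate mapping; the grid sizes, interpolation order, and test pattern are assumptions:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def to_polar(img, center, n_r=None, n_theta=360):
    """Resample an image onto a polar (r, theta) lattice about `center`,
    so that radial motion blur becomes blur along the r axis."""
    cy, cx = center
    if n_r is None:
        n_r = int(np.hypot(*img.shape) / 2)
    r = np.linspace(0, n_r - 1, n_r)
    theta = np.linspace(0, 2 * np.pi, n_theta, endpoint=False)
    R, T = np.meshgrid(r, theta, indexing="ij")
    rows = cy + R * np.sin(T)
    cols = cx + R * np.cos(T)
    return map_coordinates(img, [rows, cols], order=1, mode="constant")

# Toy usage: a centred disc becomes (roughly) theta-invariant in polar coordinates.
y, x = np.mgrid[0:256, 0:256]
img = (np.hypot(y - 128, x - 128) < 60).astype(float)
polar = to_polar(img, center=(128, 128))
print(polar.shape)          # (n_r, 360)
```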

11.
Hsung TC  Lun DP  Ng WW 《Applied optics》2011,50(21):3973-3986
In optical phase shift profilometry (PSP), parallel fringe patterns are projected onto an object and the deformed fringes are captured with a digital camera. The technique is of particular interest for real-time three-dimensional (3D) modeling applications because it enables 3D reconstruction from just a few image captures. When this approach is used in a real-life environment, however, noise in the captured images can greatly affect the quality of the reconstructed 3D model. In this paper, a new image enhancement algorithm based on the oriented two-dimensional dual-tree complex wavelet transform (DT-CWT) is proposed for denoising the captured fringe images. The proposed algorithm makes use of the special analytic property of the DT-CWT to obtain a sparse representation of the fringe image. Based on this sparse representation, a new iterative regularization procedure is applied to enhance the noisy fringe image. The new approach introduces an additional preprocessing step to improve the initial guess of the iterative algorithm. Compared with traditional image enhancement techniques, the proposed algorithm achieves a further improvement of 7.2 dB on average in the signal-to-noise ratio (SNR). When the proposed algorithm is applied to optical PSP, the new approach enables the reconstruction of 3D models with an accuracy improved by 6 to 20 dB in SNR over the traditional approaches when the fringe images are noisy.
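
For flavour, a hedged sketch of wavelet-domain fringe denoising using an ordinary separable DWT with soft thresholding (PyWavelets), i.e. a much simpler stand-in for the paper's oriented DT-CWT and iterative regularization; the wavelet, threshold rule, and test fringe are assumptions, and this is not the authors' algorithm:

```python
import numpy as np
import pywt

def wavelet_denoise(img, wavelet="db4", levels=4, k=3.0):
    """Soft-threshold the detail coefficients of a 2-D DWT.
    Threshold = k * sigma, with sigma estimated from the finest diagonal band."""
    coeffs = pywt.wavedec2(img, wavelet, level=levels)
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745     # robust noise estimate
    thr = k * sigma
    out = [coeffs[0]]
    for detail in coeffs[1:]:
        out.append(tuple(pywt.threshold(d, thr, mode="soft") for d in detail))
    return pywt.waverec2(out, wavelet)

# Toy usage: denoise a synthetic noisy fringe pattern and print SNR before/after.
y, x = np.mgrid[0:256, 0:256]
fringe = 0.5 + 0.5 * np.cos(2 * np.pi * x / 16)
noisy = fringe + 0.1 * np.random.default_rng(3).standard_normal(fringe.shape)
den = wavelet_denoise(noisy)[:256, :256]
snr = lambda ref, est: 10 * np.log10(np.sum(ref**2) / np.sum((ref - est) ** 2))
print(round(snr(fringe, noisy), 1), round(snr(fringe, den), 1))
```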

12.
Piezoelectric materials have dominated ultrasonic transducer technology. Recently, capacitive micromachined ultrasonic transducers (CMUTs) have emerged as an alternative technology offering advantages such as wide bandwidth, ease of fabricating large arrays, and potential for integration with electronics. The aim of this paper is to demonstrate the viability of CMUTs for ultrasound imaging. We present the first pulse-echo phased array B-scan sector images using a 128-element, one-dimensional (1-D) linear CMUT array. We fabricated 64- and 128-element 1-D CMUT arrays with 100% yield and uniform element response across the arrays. These arrays have been operated in immersion with no failure or degradation in performance over time. For imaging experiments, we built a resolution test phantom roughly mimicking the attenuation properties of soft tissue. We used a PC-based experimental system, including custom-designed electronic circuits, to acquire the complete set of 128 x 128 RF A-scans from all transmit-receive element combinations. We obtained the pulse-echo frequency response by analyzing the echo signals from wire targets. These echo signals presented an 80% fractional bandwidth around 3 MHz, including the effect of attenuation in the propagating medium. We reconstructed the B-scan images with a sector angle of 90 degrees and an image depth of 210 mm through offline processing using RF beamforming and synthetic phased array approaches. The measured 6-dB lateral and axial resolutions at 135 mm depth were 0.0144 radians and 0.3 mm, respectively. The electronic noise floor of the image was more than 50 dB below the maximum mainlobe magnitude. We also performed preliminary investigations on the effects of crosstalk among array elements on the image quality. In the near field, some artifacts were observable extending out from the array to a depth of 2 cm. A tail also was observed in the point spread function (PSF) in the axial direction, indicating the existence of crosstalk. The relative amplitude of this tail with respect to the mainlobe was less than -20 dB.
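
As a quick sanity check on the figures reported above: pulse-echo axial resolution is roughly c/(2B), where B is the absolute bandwidth. With the stated 80% fractional bandwidth around 3 MHz and an assumed sound speed of 1540 m/s (an assumption, not a value from the paper), this lands near the measured 0.3 mm:

```python
c = 1540.0            # assumed speed of sound in the phantom [m/s]
f0 = 3.0e6            # centre frequency [Hz]
bw = 0.80 * f0        # 80% fractional bandwidth -> 2.4 MHz absolute bandwidth
axial_res = c / (2 * bw)
print(round(axial_res * 1e3, 2), "mm")   # ~0.32 mm, consistent with the reported 0.3 mm
```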

13.
Animals rotate their eyes to gaze at target prey, enhancing their ability to measure the distance to the target precisely so that they can catch it. In such animals, visual tracking combines triangular eye-vergence control with motion control of the body by visual servoing. This research aims to realize bionic tracking performance in a robot, in which the body links move together with the eyes' viewing orientation. This paper proposes a hand & eye-vergence dual control system that includes two feedback loops: an outer loop for conventional visual servoing, which directs a manipulator toward a target object, and an inner loop for active motion control of the binocular cameras, which changes the viewpoint along with the moving object to give an accurate and broad observation. The research also focuses on how to compensate for the fictitious motion of the target seen in the camera images of an eye-in-hand system, where the camera is fixed on the end-effector and moves together with the hand. A robust motion-feedforward (MFF) recognition method is proposed that compensates for this fictitious motion based on the manipulator's joint velocities; the real motion of the target seen in the camera images is then extracted, which improves the image-feedback sensing unit and makes the whole servoing system dynamically stable. The effectiveness of the proposed hand & eye-vergence visual servoing method is shown by tracking experiments using a 6-DoF robot manipulator and a 3-DoF binocular vision system.
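
A generic image-based visual servoing step of the kind the outer loop above performs: a camera velocity command v = -λ L⁺ e, where e is the image-feature error and L the interaction matrix for point features. The feature values, depths, and gain below are placeholders, and the motion-feedforward compensation itself is not reproduced:

```python
import numpy as np

def point_interaction_matrix(x, y, Z):
    """Interaction matrix of a normalized image point (x, y) at depth Z,
    relating its image velocity to the camera's 6-D spatial velocity."""
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x * x), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y * y, -x * y, -x],
    ])

def ibvs_velocity(features, desired, depths, gain=0.5):
    """One IBVS step: v = -gain * pinv(L) @ (s - s*)."""
    e = (np.asarray(features) - np.asarray(desired)).ravel()
    L = np.vstack([point_interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(features, depths)])
    return -gain * np.linalg.pinv(L) @ e

# Toy usage: four tracked points, slightly off their desired positions.
s = [(0.11, 0.10), (-0.09, 0.10), (-0.10, -0.11), (0.10, -0.10)]
s_star = [(0.10, 0.10), (-0.10, 0.10), (-0.10, -0.10), (0.10, -0.10)]
v = ibvs_velocity(s, s_star, depths=[0.5] * 4)
print(v.round(3))        # 6-vector (vx, vy, vz, wx, wy, wz) camera velocity command
```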

14.
Dragoman D  Dragoman M 《Applied optics》2003,42(8):1515-1519
We show that an array of optically actuated, biased cantilevers can work as an optical data storage device able to encode data stored as arrays of optical pixels (images). Each of these optical pixels can, in addition, have a predetermined pixel depth, expressed as a certain number of gray levels. This new optical memory is able to work at a data rate of approximately 7 GB/s for an image with 128 x 128 pixels.

15.
This work presents and implements a CMOS real-time focal-plane motion sensor intended to detect global motion, using a bipolar junction transistor (BJT)-based retinal smoothing network and a modified correlation-based algorithm. In the proposed design, the BJT-based retinal photoreceptor and smoothing network are adopted to acquire images and enhance the contrast of an image, while the modified correlation-based algorithm is used in signal processing to determine the velocity and direction of the incident image. The deviations of the calculated velocity and direction for different image patterns are greatly reduced by averaging the correlated output over 16 frame-sampling periods. The proposed motion sensor includes a 32 × 32 pixel array with a pixel size of 100 × 100 µm². The fill factor is 11.6% and the total chip area is 4200 × 4000 µm². The DC power consumption is 120 mW at 5 V in the dark. Experimental results have successfully confirmed that the proposed motion sensor can work with different incident images and detect a velocity between 1 pixel/s and 140,000 pixels/s by controlling the frame-sampling period. The minimum detectable displacement in a frame-sampling period is 5 µm. Consequently, the proposed high-performance motion sensor can be applied to many real-time motion detection systems.
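
A software analogue of the correlation-based read-out described above: estimate the per-frame displacement from the peak of the cross-correlation between consecutive frames and average over 16 frame periods. The optical and circuit details of the sensor are of course not modelled, and the frame data and period are placeholders:

```python
import numpy as np

def frame_shift(a, b):
    """Pixel-resolution shift of frame a relative to frame b via FFT cross-correlation."""
    corr = np.real(np.fft.ifft2(np.fft.fft2(a) * np.conj(np.fft.fft2(b))))
    shape = np.array(corr.shape)
    peak = np.array(np.unravel_index(np.argmax(corr), corr.shape))
    peak[peak > shape // 2] -= shape[peak > shape // 2]   # wrap to signed shifts
    return peak                                            # (dy, dx)

def mean_velocity(frames, frame_period):
    """Average the displacement over consecutive frame pairs (e.g. 16 of them)."""
    shifts = np.array([frame_shift(frames[i + 1], frames[i])
                       for i in range(len(frames) - 1)])
    return shifts.mean(axis=0) / frame_period              # pixels per second (vy, vx)

# Toy usage: a pattern drifting 2 pixels right per frame at a 1 ms frame period.
rng = np.random.default_rng(4)
base = rng.random((32, 32))
frames = [np.roll(base, (0, 2 * i), axis=(0, 1)) for i in range(17)]
print(mean_velocity(frames, frame_period=1e-3))   # ~[0, 2000] pixels/s
```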

16.
A survey is given of stereology-based two-dimensional (2D) and three-dimensional (3D) approaches to shape assessment of embedded or non-embedded particles. First, the development of global parameters characterizing the geometric structure of cementitious materials is outlined. Second, these parameters are combined to yield effective shape estimators that can be applied to section images of embedded particles or to projection images of non-embedded particles. This application can be purely 2D, denoted quantitative image analysis, giving information only on what is displayed of the particle(s) in the section or projection image plane. However, the researcher should strive for a geometrical-statistical (stereological) extrapolation of the 2D observations to the real world's third dimension. This is demonstrated to be superior to the 2D approach; however, it requires a careful sampling strategy to provide representative information on structure.

17.
In online phase measuring profilometry (PMP), when the measured object moves at high speed, the captured deformed fringe patterns are often motion-blurred, which increases the restoration error and, in severe cases, can make three-dimensional reconstruction impossible. To apply online PMP to the online three-dimensional measurement of rail profiles and surface defects, and to deblur the blurred deformed fringes on the rail surface, this paper compares several blurred-image restoration algorithms, including Wiener filtering, a point-spread-function-based algorithm, blind deconvolution, and the Richardson-Lucy algorithm, and evaluates the restoration quality of the blurred fringe images using the peak signal-to-noise ratio (PSNR). The relationship between vehicle running speed and restoration quality is also studied, a curve relating the two is obtained, an error analysis is carried out, and three-dimensional reconstruction of the rail profile is achieved with online PMP. Theoretical and experimental results show that, for online three-dimensional measurement of rail profiles and surface defects, the Richardson-Lucy algorithm gives the best restoration, and the degree of image restoration has a polynomial relationship with the vehicle running speed.
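
Since the comparison above favours Richardson-Lucy, here is a minimal self-contained RL iteration (not the paper's implementation) together with the PSNR metric used for the evaluation; the horizontal line kernel is a placeholder standing in for the train-motion blur:

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(blurred, psf, n_iter=30):
    """Basic Richardson-Lucy deconvolution for a non-negative image."""
    est = np.full_like(blurred, blurred.mean())
    psf_mirror = psf[::-1, ::-1]
    for _ in range(n_iter):
        conv = fftconvolve(est, psf, mode="same") + 1e-12
        est *= fftconvolve(blurred / conv, psf_mirror, mode="same")
    return est

def psnr(ref, est):
    mse = np.mean((ref - est) ** 2)
    return 10 * np.log10(ref.max() ** 2 / mse)

# Toy usage: deblur a synthetic fringe image blurred by horizontal motion.
y, x = np.mgrid[0:128, 0:128]
fringe = 0.5 + 0.5 * np.cos(2 * np.pi * x / 12)
psf = np.ones((1, 9)) / 9.0                       # 9-pixel horizontal motion blur
blurred = fftconvolve(fringe, psf, mode="same")
restored = richardson_lucy(blurred, psf, n_iter=50)
print(round(psnr(fringe, blurred), 1), round(psnr(fringe, restored), 1))  # PSNR before / after
```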

18.
Avrin A  Stern A  Kopeika NS 《Applied optics》2006,45(23):5950-5959
We present an algorithm to realign images distorted by motion and vibrations captured in cameras that use a scanning vector sensor with an interlaced scheme. In particular, the method is developed for images captured by a staggered time delay and integration camera distorted by motion. The algorithm improves the motion-distorted image by adjusting its fields irrespective of the type of motion that occurs during the exposure. The algorithm performs two tasks: estimation of the field relative motion during the exposure by a normal least-squares estimation technique and improvement of the degraded image from such motion distortion. The algorithm uses matrix computations; therefore it has a computation advantage over algorithms based on the technique of searching for a match. The algorithm is successfully demonstrated on both simulated and real images.
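
A hedged sketch of the least-squares flavour of the field-realignment idea: estimate a global translation between two fields from image gradients via the normal equations (one Lucas-Kanade-style step), then undo it. The staggered-TDI specifics of the paper are not modelled, and the test pattern and shift values are assumptions:

```python
import numpy as np
from scipy.ndimage import shift as nd_shift

def estimate_shift_ls(f0, f1):
    """Least-squares (normal-equations) estimate of the translation d such that
    f1(p + d) ~= f0(p), from the brightness-constancy linearization
    Ix*dx + Iy*dy = f0 - f1."""
    Iy, Ix = np.gradient(f1.astype(float))
    It = f0.astype(float) - f1.astype(float)
    A = np.array([[np.sum(Ix * Ix), np.sum(Ix * Iy)],
                  [np.sum(Ix * Iy), np.sum(Iy * Iy)]])
    b = np.array([np.sum(Ix * It), np.sum(Iy * It)])
    dx, dy = np.linalg.solve(A, b)
    return dy, dx

# Toy usage: a smooth field and a sub-pixel-shifted copy of it.
y, x = np.mgrid[0:128, 0:128]
f0 = np.sin(2 * np.pi * x / 40) + np.cos(2 * np.pi * y / 55)
f1 = nd_shift(f0, (0.3, -0.6), order=3, mode="nearest")
dy, dx = estimate_shift_ls(f0, f1)
print(round(dy, 2), round(dx, 2))               # roughly 0.3 and -0.6
realigned = nd_shift(f1, (-dy, -dx), order=1)   # undo the estimated field displacement
```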

19.
Arai J  Okui M  Yamashita T  Okano F 《Applied optics》2006,45(8):1704-1712
We have developed an integral three-dimensional (3-D) television that uses a 2000-scanning-line video system and can shoot and display 3-D color moving images in real time. We had previously developed an integral 3-D television that used a high-definition television system. To further improve the picture quality of the reconstructed image, the new system uses approximately 6 times as many elemental images [160 (horizontal) x 118 (vertical) elemental images], arranged at approximately 1.5 times the density. In comparison, an image near the lens array can be reconstructed at approximately 1.9 times the spatial frequency, and the viewing angle is approximately 1.5 times as wide.

20.