Similar Documents
 20 similar documents found (search time: 15 ms)
1.
Stereo is an important cue for visually guided robots. While moving around in the world, such a robot can use dynamic fixation to overcome limitations in image resolution and field of view. In this paper, a binocular stereo system capable of dynamic fixation is presented. The external calibration is performed continuously, taking temporal consistency into consideration, which greatly simplifies the process. The essential matrix, which is estimated in real time, is used to describe the epipolar geometry. It is shown how outliers can be identified and excluded from the calculations. An iterative approach based on a differential model of the optical flow, commonly used in structure from motion, is also presented and tested against the essential matrix. The iterative method is shown to be superior in terms of both computational speed and robustness when the vergence angles are less than about 15°. For larger angles, the differential model is insufficient and the essential matrix is preferably used instead.
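The outlier test at the heart of such systems is the epipolar constraint: for a correct correspondence in normalized coordinates, x₂ᵀEx₁ ≈ 0. The following is a minimal sketch of that check (not the paper's actual pipeline; the threshold value is an assumption):

```python
import numpy as np

def skew(t):
    """Cross-product (skew-symmetric) matrix of a 3-vector."""
    return np.array([[0, -t[2], t[1]],
                     [t[2], 0, -t[0]],
                     [-t[1], t[0], 0]])

def epipolar_residuals(E, x1, x2):
    """Algebraic epipolar residuals |x2^T E x1| for normalized
    homogeneous points stored row-wise (N x 3 arrays)."""
    return np.abs(np.einsum('ij,jk,ik->i', x2, E, x1))

def classify_inliers(E, x1, x2, thresh=1e-3):
    """Flag as inliers the correspondences whose residual is below
    the threshold; the rest are excluded from later calculations."""
    return epipolar_residuals(E, x1, x2) < thresh
```

For a pure sideways translation t = (1, 0, 0) and identity rotation, E = [t]ₓ, and a correctly matched point produces a residual of exactly zero, while a mismatched point does not.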

2.
Epipolar geometry estimation is fundamental to many computer vision algorithms. It has therefore attracted considerable interest in recent years, yielding high-quality estimation algorithms for wide-baseline image pairs. Many current cameras, such as those in smartphones, produce geo-tagged images containing pose and internal calibration data: a GPS receiver estimates position; a compass, accelerometers, and gyros estimate orientation; and the focal length is recorded. Exploiting this information in an epipolar geometry estimation algorithm is useful but not trivial, since the pose measurements may be quite noisy. We introduce SOREPP (Soft Optimization method for Robust Estimation based on Pose Priors), a novel estimation algorithm designed to exploit pose priors naturally. It sparsely samples the pose space around the measured pose and, for a few promising candidates, applies a robust optimization procedure. It uses all the putative correspondences simultaneously, even though many of them are outliers, yielding a very efficient algorithm whose runtime is independent of the inlier fraction. SOREPP was extensively tested on synthetic data and on hundreds of real image pairs taken by smartphones. Its ability to handle challenging scenarios with extremely low inlier fractions of less than 10% was demonstrated. It outperforms current state-of-the-art algorithms, both those that do not use pose priors and those that do.

3.
We address the problem of simultaneous two-view epipolar geometry estimation and motion segmentation in nonstatic scenes. Given a set of noisy image pairs containing matches of n objects, we propose an unconventional, efficient, and robust method, 4D tensor voting, for estimating the n unknown epipolar geometries and segmenting the static and moving matching pairs into n independent motions. By working in the 4D isotropic and orthogonal joint image space, only two tensor voting passes are needed, and a very high noise-to-signal ratio (up to five) can be tolerated. Epipolar geometries corresponding to multiple rigid motions are extracted in succession. Only two uncalibrated frames are needed, and no simplifying assumption (such as an affine camera model or a homographic model between images) other than the pin-hole camera model is made. Our novel approach consists of propagating a local geometric smoothness constraint in the 4D joint image space, followed by global consistency enforcement to extract the fundamental matrices corresponding to the independent motions. Extensive experiments comparing our method with representative algorithms show that it achieves better performance on nonstatic scenes. Results on challenging data sets are presented.

4.
An efficient three-step search algorithm for block motion estimation
The three-step search (TSS) algorithm has been widely used in block-matching motion estimation because of its simplicity and effectiveness. The sparsely distributed pattern of checking points in the first step is well suited to finding large motion; for stationary or quasi-stationary blocks, however, it easily traps the search in a local minimum. In this paper we propose a modification of the three-step search that employs a small diamond pattern in the first step and uses an unrestricted search step to examine the center area. Experimental results show that the new efficient three-step search outperforms the new three-step search (NTSS) in terms of MSE while requiring up to 15% less computation on average.
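The idea can be sketched as follows: the first step evaluates a small diamond (catching stationary blocks) alongside the usual sparse 8-point pattern, and small-diamond winners trigger an unrestricted center-biased descent. This is an illustrative sketch under assumed block size and step sizes, not the authors' exact algorithm:

```python
import numpy as np

SMALL_DIAMOND = [(0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)]
LARGE_PATTERN = [(dx, dy) for dx in (-4, 0, 4) for dy in (-4, 0, 4)
                 if (dx, dy) != (0, 0)]

def sad(cur, ref, bx, by, dx, dy, B):
    """Sum of absolute differences; inf if the candidate leaves the frame."""
    rx, ry = bx + dx, by + dy
    if rx < 0 or ry < 0 or rx + B > ref.shape[1] or ry + B > ref.shape[0]:
        return np.inf
    return np.abs(cur[by:by+B, bx:bx+B].astype(np.int64)
                  - ref[ry:ry+B, rx:rx+B].astype(np.int64)).sum()

def e3ss(cur, ref, bx, by, B=8):
    cost = lambda v: sad(cur, ref, bx, by, v[0], v[1], B)
    # Step 1: small diamond (quasi-stationary motion) plus the usual
    # sparse 8-point pattern at step size 4 (large motion).
    best = min(SMALL_DIAMOND + LARGE_PATTERN, key=cost)
    if best in SMALL_DIAMOND:
        # Unrestricted center-biased search: descend along small
        # diamonds until the current center wins.
        while True:
            nxt = min(((best[0] + dx, best[1] + dy)
                       for dx, dy in SMALL_DIAMOND), key=cost)
            if nxt == best:
                return best
            best = nxt
    # Otherwise finish like the ordinary three-step search: halve the step.
    for step in (2, 1):
        cands = [best] + [(best[0] + dx * step, best[1] + dy * step)
                          for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                          if (dx, dy) != (0, 0)]
        best = min(cands, key=cost)
    return best
```

On a smooth test image, a stationary block is resolved immediately by the small diamond, while a one-pixel shift is found by the center-biased descent.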

5.
6.
Motion estimation plays a vital role in reducing temporal correlation in video codecs, but it requires high computational complexity. Various algorithms have tried to reduce this complexity; however, these reduced-complexity routines are not as regular as the full search algorithm (FSA), and regularity is very important for a hardware implementation of an algorithm, even at the cost of extra complexity. The goal of this paper is to develop an efficient and regular algorithm that mimics the FSA by searching a small area exhaustively. Our proposed algorithm is based on two observations. The first is that the motion vector of a block falls within a specific rectangular area designated by the prediction vectors. The second is that, in most cases, this rectangular area is smaller than one fourth of the FSA's search area. The search area of the proposed method is therefore found adaptively for each block of a frame: the temporal and spatial correlations among the motion vectors of neighboring blocks are exploited to determine a rectangular search area, and the best matching block in this area is selected. The proposed algorithm is as regular as the FSA but requires less computation owing to its smaller search area. It is also as simple to implement as the FSA and is comparable with many existing fast search algorithms. Simulation results confirm the claimed performance and efficiency of the algorithm.
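The adaptive-rectangle idea can be sketched as below: take the bounding box of the predictor motion vectors, enlarge it by a margin, and search that window exhaustively. This is a minimal illustration under assumptions (the predictor vectors and margin are hypothetical), not the paper's exact procedure:

```python
import numpy as np

def sad(cur, ref, bx, by, dx, dy, B):
    """Sum of absolute differences; inf if the candidate leaves the frame."""
    rx, ry = bx + dx, by + dy
    if rx < 0 or ry < 0 or rx + B > ref.shape[1] or ry + B > ref.shape[0]:
        return np.inf
    return np.abs(cur[by:by+B, bx:bx+B].astype(np.int64)
                  - ref[ry:ry+B, rx:rx+B].astype(np.int64)).sum()

def adaptive_area_search(cur, ref, bx, by, B, predictors, margin=2):
    """Exhaustively search the rectangle spanned by the predictor
    motion vectors (spatial/temporal neighbors), enlarged by a margin."""
    xs = [dx for dx, _ in predictors]
    ys = [dy for _, dy in predictors]
    best, best_cost = None, np.inf
    for dy in range(min(ys) - margin, max(ys) + margin + 1):
        for dx in range(min(xs) - margin, max(xs) + margin + 1):
            c = sad(cur, ref, bx, by, dx, dy, B)
            if c < best_cost:
                best, best_cost = (dx, dy), c
    return best
```

Because the window is exhaustive, the search is as regular as the FSA within it, only smaller.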

7.
Based on actual exploration and development data from oilfield X, a system dynamics model of petroleum exploration and development is built and simulated. The results show that the model faithfully reflects the realities of petroleum exploration and development, produces good predictions, and effectively improves the foresight and scientific soundness of decision making.

8.
Data reduction can improve the storage, transfer time, and processing requirements of very large data sets. One of the challenges in designing effective data reduction techniques is to preserve the ability to use the reduced format directly for a wide range of database and data mining applications. We propose the novel idea of hierarchical subspace sampling for creating a reduced representation of the data. The method naturally estimates the local implicit dimensionality of each point very effectively and thereby creates a variable-dimensionality reduced representation of the data, adapting its representation to the behavior of the immediate locality of each data point. An important property of the subspace sampling technique is that the overall compression efficiency improves with increasing database size. Because of its sampling approach, the procedure is extremely fast and scales linearly with both data set size and dimensionality. We propose new and effective solutions to problems such as selectivity estimation and approximate nearest-neighbor search, achieved by utilizing the locality-specific subspace characteristics of the data revealed by subspace sampling.

9.
Controlling a robot system using camera information is a challenging task under unpredictable conditions such as feature-point mismatch and changing scene illumination. This paper presents a solution for the visual control of a nonholonomic mobile robot in demanding real-world circumstances, based on machine learning techniques. A novel intelligent approach for mobile robots using neural networks (NNs), the learning from demonstration (LfD) framework, and the epipolar geometry between two views is proposed and evaluated in a series of experiments. A direct mapping from the image space to the actuator command is realized in two phases. In an offline phase, the NN-LfD approach is employed to relate the feature position in the image plane to the angular velocity for lateral motion correction. An online phase uses a switching vision-based scheme between an epipole-based linear velocity controller and the NN-LfD-based angular velocity controller, where the selection depends on the feature's distance from a pre-defined interest area in the image. In total, 18 architectures and 6 learning algorithms are tested to find the optimal solution for robot control. The best training outcome for each learning algorithm is then employed in real time to discover the optimal NN configuration for robot orientation correction. Experiments conducted on a nonholonomic mobile robot in a structured indoor environment confirm excellent performance with respect to system robustness and positioning accuracy at the desired location.

10.
This article proposes a high efficiency video coding (HEVC) hardware design using a block-matching motion estimation algorithm. A hybrid parallel spiral and adaptive threshold star diamond search algorithm (Hyb PS-ATSDSA) is proposed for fast motion estimation in HEVC. The parallel spiral search approach uses a spiral pattern to search from the center outward, while the adaptive threshold star diamond algorithm consists of two phases: an adaptive threshold and a star diamond search. To lower the computational complexity of the HEVC architecture, the parallel spiral search algorithm uses several block-matching schemes. The adaptive threshold and star diamond algorithm reduce matching errors, remove invalid blocks early in the motion estimation procedure, and finally predict the final motion of the image. The hybrid algorithm thus increases speed. The proposed structure is implemented in the Xilinx ISE 14.5 design suite, and the experimental outcomes are compared with existing motion estimation strategies on field-programmable gate array devices. The proposed Hyb-PS-ATSDSA-ME-HEVC method attains 33.97%, 32.97%, 62.97%, and 26.97% lower delay and 34.867%, 45.97%, 27.97%, and 43.967% lower area compared with the existing FSA-ME-HEVC, TZSA-ME-HEVC, hyb TZSA-IME-HEVC, and IBMSA-ME-HEVC methods, respectively.

11.
Computational Visual Media - When searching for a dynamic target in an unknown real world scene, search efficiency is greatly reduced if users lack information about the spatial structure of the...

12.
We provide a sensor fusion framework for solving the problem of joint ego-motion and road geometry estimation. More specifically, we employ a sensor fusion framework to make systematic use of measurements from a forward-looking radar and camera, a steering wheel angle sensor, wheel speed sensors, and inertial sensors to compute good estimates of the road geometry and of the motion of the ego vehicle on this road. To solve this problem we derive dynamical models for the ego vehicle, the road, and the leading vehicles. The main difference from existing approaches is that we make use of a new dynamic model for the road. An extended Kalman filter is used to fuse the data and to filter measurements from the camera in order to improve the road geometry estimate. The proposed solution has been tested and compared to existing algorithms for this problem, using measurements from authentic traffic environments on public roads in Sweden. The results clearly indicate that the proposed method provides better estimates.

13.
A method is proposed for approximating the classic edit distance between strings. The method is based on a mapping of strings into vectors belonging to a space with an easily computable metric. The mapping preserves the closeness of strings and makes it possible to accelerate the computation of edit distances. The developed q-gram method of approximating edit distances, and its two randomized versions, improve the approximation quality in comparison with well-known results. Translated from Kibernetika i Sistemnyi Analiz, No. 4, pp. 18–38, July–August 2007.
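The simplest instance of this vector-embedding idea is the classical q-gram bound: a single edit changes at most q q-grams, so the L1 distance between q-gram count vectors, divided by 2q, lower-bounds the edit distance. The sketch below illustrates that bound (it is not the paper's specific randomized construction):

```python
from collections import Counter

def qgram_profile(s, q=2):
    """Multiset of the string's q-grams, as a count vector."""
    return Counter(s[i:i+q] for i in range(len(s) - q + 1))

def qgram_lower_bound(a, b, q=2):
    """Cheap lower bound on edit distance: one edit changes at most
    q q-grams in each string, so ed >= ceil(L1_profile_dist / (2q))."""
    pa, pb = qgram_profile(a, q), qgram_profile(b, q)
    l1 = sum(abs(pa[g] - pb[g]) for g in pa.keys() | pb.keys())
    return -(-l1 // (2 * q))  # ceiling division

def edit_distance(a, b):
    """Classic dynamic-programming edit distance, for comparison."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[-1] + 1,
                           prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]
```

The profile distance is computable in linear time, so it can filter candidate pairs before the quadratic dynamic program is run.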

14.
To balance the complexity and accuracy of search algorithms in motion estimation, a dual-mode motion search algorithm is proposed that combines an improved particle swarm optimization (PSO) algorithm with the adaptive rood pattern search (ARPS) algorithm. The algorithm applies a different search strategy according to the degree of motion in the image (PSO for intense motion, ARPS for gentle motion), effectively combining the global character of PSO with the local character of ARPS while retaining the speed of ARPS. Experiments show that the overall performance of the algorithm is superior to traditional single-mode motion estimation algorithms and to existing multi-mode motion estimation algorithms.

15.
In this paper an efficient procedure for calculating non-exceedance probabilities of the structural response is presented, with emphasis on structures modeled by large finite element systems with many uncertain parameters. This problem receives considerable attention in numerous applications of engineering mechanics, such as space and aerospace engineering. For this purpose, a novel sampling procedure is introduced that allows a significant reduction in the variance of the estimator of the probability of failure compared with direct Monte Carlo simulation. This improvement in computational efficiency is most important, as the computational effort is much higher when uncertainties are considered. The only prerequisite for applying this sampling procedure is an estimate of the gradient of the performance function of the structure. The gradient is calculated efficiently by exploiting the correlation between a randomly chosen input and the corresponding output of the system. The proposed concept is especially suited for high-dimensional problems in reliability engineering, i.e., for a rather large number n of random variables, say n > 100. To demonstrate the practical value of the methodology, a reliability analysis of the INTEGRAL satellite of the European Space Agency (ESA) has been performed. The results show that, for both the frequency response analysis and the structural reliability analysis, a substantial number of parameters of the finite element model play an important role.

16.
In this paper, we present an auxiliary algorithm, effective in terms of the speed of obtaining the optimal solution, that helps the simplex method commence from a better initial basic feasible solution. The idea of choosing a direction towards an optimal point presented in this paper is new and easily implemented. In our experiments, the algorithm reaches a corner point of the feasible region within a few iterative steps, independent of the starting point. The computational results show that when the auxiliary algorithm is adopted as the phase I process, the simplex method consistently reduces the number of required iterations by about 40%. Scope and purpose: Recent progress in the implementation of simplex and interior point methods, as well as advances in computer hardware, has extended the capability of linear programming with today's computing technology. It is well known that solution times for the interior point method improve with problem size, but experimental evidence suggests that interior point methods dominate simplex-based methods only for very large scale linear programs. For medium-sized problems, how to combine the best features of these two methods into an effective algorithm for solving linear programming problems remains an interesting question. In this research we present a new, effective ε-optimality search direction, based on the interior point method, that starts the simplex method from an initial basic feasible solution near the optimal point.

17.
We present an efficient search method for job-shop scheduling problems. Our technique is based on an innovative way of relaxing and subsequently reimposing the capacity constraints on some critical operations. We integrate this technique into a fast tabu search algorithm. Our computational results on benchmark problems show that this approach is very effective: upper bounds for 11 well-known test problems are improved. Through the work presented we hope to move a step closer to the ultimate vision of an automated system for generating optimal or near-optimal production schedules. The preconditions for such a system are in place, with the increasingly widespread adoption of enterprise information systems and plant-floor tracking systems based on bar code or wireless technologies. One of the remaining obstacles, however, is that scheduling problems arising from many production environments, including job-shops, are extremely difficult to solve. Motivated by the recent success of local search methods on the job-shop scheduling problem, we propose a new diversification technique based on relaxing and subsequently reimposing the capacity constraints on some critical operations. We integrate this technique into a fast tabu search algorithm and demonstrate its effectiveness through extensive computational experiments. In future research, we will consider other diversification techniques that are not restricted to critical operations.

18.
Similarity search is important in information retrieval applications, where objects are usually represented as vectors of high dimensionality, leading to an increasing need to support the indexing of high-dimensional data. Indexing structures based on space partitioning are powerless here because of the well-known "curse of dimensionality"; linear scan of the data with approximation is more efficient for high-dimensional similarity search. However, approaches so far have concentrated on reducing I/O and ignored the computation cost. For an expensive distance function such as the Lp norm with fractional p, the computation cost becomes the bottleneck. We propose a new technique to address expensive distance functions by "indexing the function": some key values of the function are pre-computed once, and these values are then used to develop upper and lower bounds on the distance between a data vector and the query vector. The technique is extremely efficient since it avoids most of the distance function computations; moreover, it involves no extra secondary storage, because no index is constructed or stored. Its efficiency is confirmed by cost analysis as well as by experiments on synthetic and real data.
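The pruning pattern behind such schemes can be illustrated with a different (simpler) bound than the paper's pre-computed one: for 0 < p < 1 the map x ↦ x^p is concave with f(0) = 0, hence subadditive, so Σ|dᵢ|^p ≥ (Σ|dᵢ|)^p. The cheap L1 distance therefore yields a valid lower bound that lets a linear scan skip most expensive fractional-distance evaluations. A sketch, not the authors' method:

```python
import numpy as np

def frac_dist(x, y, p=0.5):
    """Expensive fractional 'distance': sum of |d_i|^p, 0 < p < 1."""
    return np.sum(np.abs(x - y) ** p)

def frac_lower_bound(x, y, p=0.5):
    """Cheap lower bound: |.|^p is concave with f(0)=0, hence
    subadditive, so sum |d_i|^p >= (sum |d_i|)^p = (L1 distance)^p."""
    return np.sum(np.abs(x - y)) ** p

def nearest_neighbor(data, q, p=0.5):
    """Linear scan that skips the expensive distance whenever the
    cheap bound already exceeds the best distance seen so far."""
    best_i, best_d = -1, np.inf
    for i, x in enumerate(data):
        if frac_lower_bound(x, q, p) >= best_d:
            continue  # pruned: this point cannot beat the current best
        d = frac_dist(x, q, p)
        if d < best_d:
            best_i, best_d = i, d
    return best_i, best_d
```

The pruning is exact: a skipped point's true distance is at least its bound, which already exceeds the best distance found, so the scan returns the same nearest neighbor as brute force.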

19.
20.
Indexing high-dimensional data for efficient in-memory similarity search
In main memory systems, the L2 cache typically employs cache line sizes of 32-128 bytes. These values are relatively small compared to high-dimensional data, e.g., >32D. The consequence is that existing techniques (designed for low-dimensional data) that minimize cache misses are no longer effective. We present a novel index structure, called the Δ-tree, to speed up high-dimensional queries in a main memory environment. The Δ-tree is a multilevel structure in which each level represents the data space at a different dimensionality: the number of dimensions increases toward the leaf level, and the reduced dimensions are obtained using principal component analysis. Each level of the tree serves to prune the search space more efficiently, as the lower dimensionalities reduce the distance computation and better exploit the small cache line size. Additionally, the top-down clustering scheme captures the features of the data set and hence reduces the search space. We also propose an extension, called the Δ+-tree, that globally clusters the data space and then partitions clusters into small regions; it further reduces the computational cost and cache misses. We conducted extensive experiments to evaluate the proposed structures against existing techniques on different kinds of data sets. Our results show that the Δ+-tree is superior in most cases.
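The property that makes PCA-based levels work is that the distance computed over the leading principal components never exceeds the full Euclidean distance, because the PCA basis is orthonormal, so a candidate can be pruned before its remaining coordinates are touched. A minimal flat sketch of that pruning (not the actual Δ-tree structure; the number of leading components k is an assumption):

```python
import numpy as np

def pca_basis(data):
    """Orthonormal PCA basis (rows = components) via SVD of centred data."""
    mean = data.mean(axis=0)
    _, _, vt = np.linalg.svd(data - mean, full_matrices=False)
    return mean, vt

def nn_with_partial_pruning(data, q, k=4):
    """Linear scan in PCA coordinates: the squared distance over the
    first k components is a lower bound on the full squared distance,
    so it can prune candidates before the remaining dims are summed."""
    mean, vt = pca_basis(data)
    proj = (data - mean) @ vt.T          # all points in PCA coordinates
    qp = (q - mean) @ vt.T
    best_i, best_d2 = -1, np.inf
    for i, x in enumerate(proj):
        partial = np.sum((x[:k] - qp[:k]) ** 2)   # cheap lower bound
        if partial >= best_d2:
            continue                              # pruned early
        d2 = partial + np.sum((x[k:] - qp[k:]) ** 2)
        if d2 < best_d2:
            best_i, best_d2 = i, d2
    return best_i
```

Because the rotation preserves Euclidean distances exactly, the pruned scan returns the same nearest neighbor as a brute-force scan over the original vectors.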
