Similar documents
20 similar documents retrieved (search time: 218 ms)
1.
This work presents a theory and methodology for simultaneous detection of local spatial and temporal scales in video data. The underlying idea is that if we process video data by spatio-temporal receptive fields at multiple spatial and temporal scales, we would like to generate hypotheses about the spatial extent and the temporal duration of the underlying spatio-temporal image structures that gave rise to the feature responses. For two types of spatio-temporal scale-space representations, (i) a non-causal Gaussian spatio-temporal scale space for offline analysis of pre-recorded video sequences and (ii) a time-causal and time-recursive spatio-temporal scale space for online analysis of real-time video streams, we express sufficient conditions for spatio-temporal feature detectors in terms of spatio-temporal receptive fields to deliver scale-covariant and scale-invariant feature responses. We present an in-depth theoretical analysis of the scale selection properties of eight types of spatio-temporal interest point detectors in terms of either: (i)–(ii) the spatial Laplacian applied to the first- and second-order temporal derivatives, (iii)–(iv) the determinant of the spatial Hessian applied to the first- and second-order temporal derivatives, (v) the determinant of the spatio-temporal Hessian matrix, (vi) the spatio-temporal Laplacian and (vii)–(viii) the first- and second-order temporal derivatives of the determinant of the spatial Hessian matrix. It is shown that seven of these spatio-temporal feature detectors allow for provable scale covariance and scale invariance. Then, we describe a time-causal and time-recursive algorithm for detecting sparse spatio-temporal interest points from video streams and show that it leads to intuitively reasonable results. An experimental quantification of the accuracy of the spatio-temporal scale estimates and the amount of temporal delay obtained from these spatio-temporal interest point detectors is given, showing that: (i) the spatial and temporal scale selection properties predicted by the continuous theory are well preserved in the discrete implementation and (ii) the spatial Laplacian or the determinant of the spatial Hessian applied to the first- and second-order temporal derivatives leads to much shorter temporal delays in a time-causal implementation compared to the determinant of the spatio-temporal Hessian or the first- and second-order temporal derivatives of the determinant of the spatial Hessian matrix.  相似文献   
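As a rough illustration of one of the operators listed above, the following numpy/scipy sketch computes the spatial Laplacian of the first-order temporal derivative over a grid of candidate spatial and temporal scales and keeps, at every point, the scales giving the strongest scale-normalized response. It uses the non-causal Gaussian scale space only; the normalization exponents gamma_s and gamma_t, the scale grids, and the toy input are illustrative assumptions, not the paper's settings.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def st_interest_response(video, s, tau, gamma_s=1.0, gamma_t=0.5):
    """Scale-normalized spatial Laplacian of the first-order temporal
    derivative, over a non-causal Gaussian spatio-temporal scale space.
    video has shape (T, Y, X); s and tau are spatial/temporal variances."""
    sig = (np.sqrt(tau), np.sqrt(s), np.sqrt(s))          # std devs per axis (t, y, x)
    L_txx = gaussian_filter(video, sig, order=(1, 0, 2))  # d/dt d2/dx2
    L_tyy = gaussian_filter(video, sig, order=(1, 2, 0))  # d/dt d2/dy2
    # scale normalization; the exponents are a modelling choice, not the paper's
    return (s ** gamma_s) * (tau ** gamma_t) * (L_txx + L_tyy)

video = np.random.rand(32, 64, 64)                        # toy video volume
scales_s, scales_t = [1.0, 2.0, 4.0, 8.0], [1.0, 2.0, 4.0]
stack = np.stack([np.abs(st_interest_response(video, s, t))
                  for s in scales_s for t in scales_t])
best_scale_idx = stack.argmax(axis=0)   # index into the (s, tau) grid, per point
strength = stack.max(axis=0)            # response magnitude at the selected scales
```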

2.
3.
Dansheng Yu, Yi Zhao, Ping Zhou. Calcolo, 2013, 50(3): 195–208
We introduce a type of modified truncation of approximate approximation with Gaussian kernels, which can be applied to approximate functions on compact intervals. Our results improve the related results of Chen and Cao (Appl Math Comput 217:725–734, 2010) and Müller and Varnhorn (J Approx Theory 145:171–181, 2007). We also construct approximate approximation operators in the multivariate case and obtain estimates of the approximation rate.
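For context, here is a minimal sketch of the classical (untruncated) approximate approximation operator with Gaussian kernels on a uniform grid, M_h f(x) = (πD)^{-1/2} Σ_k f(kh) exp(-(x - kh)²/(D h²)); the modified truncations studied in the paper are not reproduced, and the grid spacing h and shape parameter D are illustrative choices.

```python
import numpy as np

def approx_approximation(f, x, h=0.05, D=2.0, a=0.0, b=1.0):
    """Quasi-interpolation with Gaussian kernels on the uniform grid
    {a + k*h} covering [a, b]; D is the shape parameter."""
    nodes = np.arange(a, b + h / 2, h)
    w = np.exp(-((x[:, None] - nodes[None, :]) ** 2) / (D * h * h))
    return (w @ f(nodes)) / np.sqrt(np.pi * D)

x = np.linspace(0.1, 0.9, 200)           # evaluate away from the interval ends
err = np.max(np.abs(approx_approximation(np.sin, x) - np.sin(x)))
print(f"max interior error: {err:.2e}")  # small, but saturates as h -> 0
```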

4.
For remote sensing image registration, we find that affine transformation is suitable to describe the mapping between images. Based on the scale-invariant feature transform (SIFT), affine-SIFT (ASIFT) is capable of detecting and matching scale- and affine-invariant features. Unlike the blob feature detected in SIFT and ASIFT, a scale-invariant edge-based matching operator is employed in our new method. To find the local features, we first extract edges with a multi-scale edge detector, then the distinctive features (we call these ‘feature from edge’ or FFE) with computed scale are detected, and finally a new matching scheme is introduced for image registration. The algorithm incorporates principal component analysis (PCA) to ease the computational burden, and its affine invariance is embedded by discrete sampling as ASIFT. We present our analysis based on multi-sensor, multi-temporal, and different viewpoint images. The operator shows the potential to become a robust alternative for point-feature-based registration of remote-sensing images as subpixel registration consistency is achieved. We also show that using the proposed edge-based scale- and affine-invariant algorithm (EBSA) results in a significant speedup and fewer false matching pairs compared to the original ASIFT operator.  相似文献   

5.
Differential operators are essential in many image processing applications. Previous work has shown how to compute derivatives more accurately by examining the image locally, and by applying a difference operator which is optimal for each pixel neighborhood. The proposed technique avoids the explicit computation of fitting functions, and replaces the function fitting process by a function classification process using a filter bank of feature detection templates. Both the feature detectors and the optimal difference operators have a specific shape and an associated cost, defined by a rigid mathematical structure, which can be described by Gröbner bases. This paper introduces a cost criterion to select the operator of the best approximating function class and the most appropriate template size so that the difference operator can be locally adapted to the digitized function. We describe how to obtain discrete approximates for commonly used differential operators, and illustrate how image processing applications can benefit from the adaptive selection procedure for the operators by means of two example applications: tangent computation for digitized object boundaries and the Laplacian of Gaussian edge detector.  相似文献   

6.
We prove a general result on the exact asymptotics of the probability $P\left\{ \int_0^1 \left| \eta_\gamma(t) \right|^p dt > u^p \right\}$ as u → ∞, where p > 0, for a stationary Ornstein-Uhlenbeck process $\eta_\gamma(t)$, i.e., a Gaussian Markov process with zero mean and covariance function $\mathsf{E}\,\eta_\gamma(t)\eta_\gamma(s)$, $t, s \in \mathbb{R}$, $\gamma > 0$. We use the Laplace method for Gaussian measures in Banach spaces. Evaluation of the constants is reduced to solving an extreme value problem for the rate function and studying the spectrum of a second-order differential operator of the Sturm-Liouville type. For p = 1 and p = 2, explicit formulas for the asymptotics are given.
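A crude Monte-Carlo sketch of the probability in question is shown below, assuming the standard normalization in which the stationary Ornstein-Uhlenbeck process has unit variance and covariance exp(-γ|t-s|); plain simulation only reaches moderate values of u, whereas the paper derives the exact asymptotics as u → ∞.

```python
import numpy as np

def ou_tail_prob(u, p=2.0, gamma=1.0, n_steps=500, n_paths=100_000, seed=0):
    """Monte-Carlo estimate of P{ int_0^1 |eta_gamma(t)|^p dt > u^p } for a
    stationary OU process, assuming E eta(t) eta(s) = exp(-gamma * |t - s|)."""
    rng = np.random.default_rng(seed)
    dt = 1.0 / n_steps
    rho = np.exp(-gamma * dt)                  # exact AR(1) transition over dt
    eta = rng.standard_normal(n_paths)         # stationary start, unit variance
    integral = np.zeros(n_paths)
    for _ in range(n_steps):
        integral += np.abs(eta) ** p * dt
        eta = rho * eta + np.sqrt(1.0 - rho * rho) * rng.standard_normal(n_paths)
    return np.mean(integral > u ** p)

print(ou_tail_prob(u=1.5))   # plain simulation only reaches moderate u
```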

7.
Sonya A., Bryan W., Madonna G. Pattern Recognition, 2005, 38(12): 2426–2436
The problem of scale is of fundamental interest in image processing, as the features that we visually perceive and find meaningful vary significantly depending on their size and extent. It is well known that the strength of a feature in an image may depend on the scale at which the appropriate detection operator is applied. It is also the case that many features in images exist significantly over a limited range of scales, and, of particular interest here, that the most salient scale may vary spatially over the feature. Hence, when designing feature detection operators, it is necessary to consider the requirements for both the systematic development and adaptive application of such operators over scale- and image-domains.

We present a new approach to the design of scalable derivative edge detectors, based on the finite element method, that addresses the issues of method and scale adaptability. The finite element approach allows us to formulate scalable image derivative operators that can be implemented using a combination of piecewise-polynomial and Gaussian basis functions. The issue of scale is addressed by partitioning the image in order to identify local key scales at which significant edge points may exist. This is achieved by consideration of empirically designed functions of local image variance.  相似文献   


8.
It is well known that the strength of a feature in an image may depend on the scale at which the appropriate detection operator is applied. It is also the case that many features in images exist significantly over a limited range of scales, and, of particular interest here, that the most salient scale may vary spatially over the feature. Hence, when designing feature detection operators, it is necessary to consider the requirements for both the systematic development and adaptive application of such operators over scale- and image-domains. We present an overview to the design of scalable derivative edge detectors, based on the finite element method, that addresses the issues of method and scale-adaptability. The finite element approach allows us to formulate scalable image derivative operators that can be implemented using a combination of piecewise-polynomial and Gaussian basis functions. The general adaptive technique may be applied to a range of operators. Here we evaluate the approach using image gradient operators, and we present comparative qualitative and quantitative results for both first and second order derivative methods.  相似文献   

9.
A compact discontinuous Galerkin (CDG) method is devised for nearly incompressible linear elasticity by replacing the global lifting operator, used to determine the numerical trace of the stress tensor in a local discontinuous Galerkin method (cf. Chen et al., Math Probl Eng 20, 2010), with the local lifting operator and removing some jumping terms. It possesses a compact stencil, meaning that the degrees of freedom in one element are only connected to those in the immediate neighboring elements. Optimal error estimates in the broken energy norm, $H^1$-norm and $L^2$-norm are derived for the method, which are uniform with respect to the Lamé constant $\lambda$. Furthermore, we obtain a post-processed $H(\mathrm{div})$-conforming displacement by projecting the displacement and the corresponding trace of the CDG method into the Raviart–Thomas element space, and obtain optimal error estimates of this numerical solution in the $H(\mathrm{div})$-seminorm and $L^2$-norm, which are uniform with respect to $\lambda$. A series of numerical results is offered to illustrate the numerical performance of our method.

10.
It has been established that the second-order stochastic gradient descent (SGD) method can potentially achieve generalization performance as well as empirical optimum in a single pass through the training examples. However, second-order SGD requires computing the inverse of the Hessian matrix of the loss function, which is prohibitively expensive for structured prediction problems that usually involve a very high dimensional feature space. This paper presents a new second-order SGD method, called Periodic Step-size Adaptation (PSA). PSA approximates the Jacobian matrix of the mapping function and explores a linear relation between the Jacobian and Hessian to approximate the Hessian, which is proved to be simpler and more effective than directly approximating Hessian in an on-line setting. We tested PSA on a wide variety of models and tasks, including large scale sequence labeling tasks using conditional random fields and large scale classification tasks using linear support vector machines and convolutional neural networks. Experimental results show that single-pass performance of PSA is always very close to empirical optimum.  相似文献   

11.
In this paper we propose mathematical optimizations to select the optimal regularization parameter for ridge regression using cross-validation. The resulting algorithm is suited for large datasets and the computational cost does not depend on the size of the training set. We extend this algorithm to forward or backward feature selection in which the optimal regularization parameter is selected for each possible feature set. These feature selection algorithms yield solutions with a sparse weight matrix using a quadratic cost on the norm of the weights. A naive approach to optimizing the ridge regression parameter has a computational complexity of the order $O(R K N^{2} M)$ with $R$ the number of applied regularization parameters, $K$ the number of folds in the validation set, $N$ the number of input features and $M$ the number of data samples in the training set. Our implementation has a computational complexity of the order $O(KN^3)$ . This computational cost is smaller than that of regression without regularization $O(N^2M)$ for large datasets and is independent of the number of applied regularization parameters and the size of the training set. Combined with a feature selection algorithm the algorithm is of complexity $O(RKNN_s^3)$ and $O(RKN^3N_r)$ for forward and backward feature selection respectively, with $N_s$ the number of selected features and $N_r$ the number of removed features. This is an order $M$ faster than $O(RKNN_s^3M)$ and $O(RKN^3N_rM)$ for the naive implementation, with $N \ll M$ for large datasets. To show the performance and reduction in computational cost, we apply this technique to train recurrent neural networks using the reservoir computing approach, windowed ridge regression, least-squares support vector machines (LS-SVMs) in primal space using the fixed-size LS-SVM approximation and extreme learning machines.  相似文献   
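The cost structure described above can be illustrated with a small sketch (not the authors' implementation, and omitting the cross-validation folds and feature selection): the N×N Gram matrix and its eigendecomposition are computed once, after which every additional regularization parameter costs only O(N²), independently of the number of samples M.

```python
import numpy as np

def ridge_sweep(X, y, lambdas):
    """Ridge solutions for many regularization parameters: the O(N^2 M) Gram
    computation and the O(N^3) eigendecomposition are done once, so each
    additional lambda costs only O(N^2)."""
    G = X.T @ X                          # N x N Gram matrix
    b = X.T @ y
    evals, V = np.linalg.eigh(G)         # done once
    Vb = V.T @ b
    return {lam: V @ (Vb / (evals + lam)) for lam in lambdas}  # (G + lam*I)^-1 b

rng = np.random.default_rng(1)
X = rng.standard_normal((5000, 50))      # M = 5000 samples, N = 50 features
y = X @ rng.standard_normal(50) + 0.1 * rng.standard_normal(5000)
weights = ridge_sweep(X, y, np.logspace(-4, 2, 25))
```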

12.
In the first part of this article, a new mixed method is proposed and analyzed for parabolic integro-differential equations (PIDE) with nonsmooth initial data. Compared to the standard mixed method for PIDE, the present method does not bank on a reformulation using a resolvent operator. Based on energy arguments combined with a repeated use of an integral operator, and without using a parabolic-type duality technique, optimal $L^2$-error estimates are derived for semidiscrete approximations when the initial condition is in $L^2$. Due to the presence of the integral term, it is further observed that a negative norm estimate plays a crucial role in our error analysis. Moreover, the proposed analysis follows the spirit of the proof techniques used in deriving optimal error estimates for finite element approximations to PIDE with smooth data, and therefore it unifies both theories, i.e., the one for smooth data and the one for nonsmooth data. Finally, we extend the proposed analysis to the standard mixed method for PIDE with rough initial data and provide an optimal error estimate in $L^2$, which improves upon the results available in the literature.

13.
Vision-based fire detection is a challenging research area, since the visual features of fire change dynamically due to several factors such as weather conditions. In this paper, we propose a novel fire detection approach in which detected fire-candidate blobs are categorized as fire or non-fire under recursive Bayesian estimation. By employing the recursive estimation, we attempt to deal with fire characteristics that are dynamic as well as spatio-temporally continuous in a hidden Markov process. More specifically, for each detected fire-candidate blob, future beliefs about the hidden classes are predicted and corrected by the most recent beliefs and observations of the blob. This is repeated during the lifetime of the blob. In this framework, to reduce the Bayes error in classification, we devised a greedy margin-maximizing clustering algorithm. This algorithm learns color clusters to model the feature space while attempting to maximize the in-cluster margins within a class and between classes. To further improve the detection accuracy, we developed two methods: $\epsilon$-time delayed decision and on-line learning of the transition probability. These were designed to suppress false alarms caused by temporary fire-like instances and to determine the current class by considering the majority of previous classification results. Experiments and comparative analyses with two contemporary approaches are conducted for various fire situations. The results show that the proposed approach is superior to the previous approaches in detecting fire and reducing false alarms. Furthermore, the proposed approach is shown to be competitive in applications to real environments.
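A minimal sketch of the recursive prediction/correction step for one blob is given below; the two-class transition matrix, the observation likelihoods and the prior are placeholders, and the greedy margin-maximizing clustering, the ε-time delayed decision and the on-line learning of the transition probability are not reproduced.

```python
import numpy as np

# hidden classes: 0 = fire, 1 = non-fire; the transition matrix is a placeholder
A = np.array([[0.9, 0.1],
              [0.1, 0.9]])

def update_belief(belief, likelihood):
    """One predict/correct step of recursive Bayesian estimation for one blob.
    belief: P(class | observations so far); likelihood: P(new observation | class)."""
    predicted = A.T @ belief               # prediction through the Markov chain
    corrected = likelihood * predicted     # Bayes correction with the new observation
    return corrected / corrected.sum()     # renormalize to a distribution

belief = np.array([0.5, 0.5])              # uninformative prior for a new candidate blob
for lik in [np.array([0.7, 0.3]), np.array([0.8, 0.2]), np.array([0.4, 0.6])]:
    belief = update_belief(belief, lik)    # repeated over the lifetime of the blob
    print(belief)
```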

14.
In this paper we present a high-order kernel method for numerically solving diffusion and reaction-diffusion partial differential equations (PDEs) on smooth, closed surfaces embedded in $\mathbb{R }^d$ . For two-dimensional surfaces embedded in $\mathbb{R }^3$ , these types of problems have received growing interest in biology, chemistry, and computer graphics to model such things as diffusion of chemicals on biological cells or membranes, pattern formations in biology, nonlinear chemical oscillators in excitable media, and texture mappings. Our kernel method is based on radial basis functions and uses a semi-discrete approach (or the method-of-lines) in which the surface derivative operators that appear in the PDEs are approximated using collocation. The method only requires nodes at “scattered” locations on the surface and the corresponding normal vectors to the surface. Additionally, it does not rely on any surface-based metrics and avoids any intrinsic coordinate systems, and thus does not suffer from any coordinate distortions or singularities. We provide error estimates for the kernel-based approximate surface derivative operators and numerically study the accuracy and stability of the method. Applications to different non-linear systems of PDEs that arise in biology and chemistry are also presented.  相似文献   

15.
When designing and developing scale selection mechanisms for generating hypotheses about characteristic scales in signals, it is essential that the selected scale levels reflect the extent of the underlying structures in the signal. This paper presents a theory and in-depth theoretical analysis about the scale selection properties of methods for automatically selecting local temporal scales in time-dependent signals based on local extrema over temporal scales of scale-normalized temporal derivative responses. Specifically, this paper develops a novel theoretical framework for performing such temporal scale selection over a time-causal and time-recursive temporal domain as is necessary when processing continuous video or audio streams in real time or when modelling biological perception. For a recently developed time-causal and time-recursive scale-space concept defined by convolution with a scale-invariant limit kernel, we show that it is possible to transfer a large number of the desirable scale selection properties that hold for the Gaussian scale-space concept over a non-causal temporal domain to this temporal scale-space concept over a truly time-causal domain. Specifically, we show that for this temporal scale-space concept, it is possible to achieve true temporal scale invariance although the temporal scale levels have to be discrete, which is a novel theoretical construction. The analysis starts from a detailed comparison of different temporal scale-space concepts and their relative advantages and disadvantages, leading the focus to a class of recently extended time-causal and time-recursive temporal scale-space concepts based on first-order integrators or equivalently truncated exponential kernels coupled in cascade. Specifically, by the discrete nature of the temporal scale levels in this class of time-causal scale-space concepts, we study two special cases of distributing the intermediate temporal scale levels, by using either a uniform distribution in terms of the variance of the composed temporal scale-space kernel or a logarithmic distribution. In the case of a uniform distribution of the temporal scale levels, we show that scale selection based on local extrema of scale-normalized derivatives over temporal scales makes it possible to estimate the temporal duration of sparse local features defined in terms of temporal extrema of first- or second-order temporal derivative responses. For dense features modelled as a sine wave, the lack of temporal scale invariance does, however, constitute a major limitation for handling dense temporal structures of different temporal duration in a uniform manner. In the case of a logarithmic distribution of the temporal scale levels, specifically taken to the limit of a time-causal limit kernel with an infinitely dense distribution of the temporal scale levels towards zero temporal scale, we show that it is possible to achieve true temporal scale invariance to handle dense features modelled as a sine wave in a uniform manner over different temporal durations of the temporal structures as well to achieve more general temporal scale invariance for any signal over any temporal scaling transformation with a scaling factor that is an integer power of the distribution parameter of the time-causal limit kernel. It is shown how these temporal scale selection properties developed for a pure temporal domain carry over to feature detectors defined over time-causal spatio-temporal and spectro-temporal domains.  相似文献   
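As a small illustration of the time-causal, time-recursive smoothing referred to above, the sketch below cascades first-order recursive integrators (the discrete counterpart of truncated exponential kernels) so that each new sample updates all temporal scale levels in O(K) time. The logarithmic distribution of the level variances with distribution parameter c, and the use of the continuous-kernel relation variance = μ², are simplifying assumptions for illustration, not the paper's exact construction.

```python
import numpy as np

def temporal_scale_space_step(sample, levels, mus):
    """Advance all temporal scale levels by one new sample.  levels[k] is the
    output of k+1 cascaded first-order integrators, each implementing a
    truncated exponential kernel with time constant mus[k]."""
    x = sample
    for k, mu in enumerate(mus):
        levels[k] += (x - levels[k]) / (1.0 + mu)   # time-recursive update
        x = levels[k]                               # feed the next stage of the cascade
    return levels

c, tau0, K = 2.0, 1.0, 5                            # logarithmic scale distribution
taus = tau0 * c ** (2 * np.arange(K))               # tau_k = c**(2k) * tau_0
d_taus = np.diff(np.concatenate([[0.0], taus]))     # variance added per stage
mus = np.sqrt(d_taus)          # continuous-kernel relation var = mu^2 (approximation)

levels = np.zeros(K)
signal = np.sin(np.linspace(0, 20, 400)) + 0.2 * np.random.randn(400)
for s in signal:
    levels = temporal_scale_space_step(s, levels, mus)   # O(K) work per new sample
```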

16.
A combination method of Newton’s method and two-level piecewise linear finite element algorithm is applied for solving second-order nonlinear elliptic partial differential equations numerically. Newton’s method is to find a finite element solution by solving $m$ Newton equations on a fine mesh. The two-level Newton’s method solves $m-1$ Newton equations on a coarse mesh and processes one Newton iteration on a fine mesh. Moreover, the optimal error estimates of Newton’s method and the two-level Newton’s method are provided to justify the efficiency of the two-level Newton’s method. If we choose $H$ such that $h=O(|\log h|^{1-2/{p}}H^2)$ for the $W^{1,p}(\Omega )$ -error estimates, the two-level Newton’s method is asymptotically as accurate as Newton’s method on the fine mesh. Meanwhile, the numerical investigations provided a sufficient support for the theoretical analysis. Finally, these investigations also proved that the proposed method is efficient for solving the nonlinear elliptic problems.  相似文献   

17.
We give an explicit construction of a large subset $S \subset \mathbb{F}^n$, where $\mathbb{F}$ is a finite field, that has small intersection with any affine variety of fixed dimension and bounded degree. Our construction generalizes a recent result of Dvir and Lovett (STOC 2012), who considered varieties of degree one (that is, affine subspaces).

18.
Feature Detection with Automatic Scale Selection (cited 53 times in total: 4 self-citations, 49 by others)
The fact that objects in the world appear in different ways depending on the scale of observation has important implications if one aims at describing them. It shows that the notion of scale is of utmost importance when processing unknown measurement data by automatic methods. In their seminal works, Witkin (1983) and Koenderink (1984) proposed to approach this problem by representing image structures at different scales in a so-called scale-space representation. Traditional scale-space theory building on this work, however, does not address the problem of how to select appropriate local scales for further analysis. This article proposes a systematic methodology for dealing with this problem. A framework is presented for generating hypotheses about interesting scale levels in image data, based on a general principle stating that local extrema over scales of different combinations of γ-normalized derivatives are likely candidates to correspond to interesting structures. Specifically, it is shown how this idea can be used as a major mechanism in algorithms for automatic scale selection, which adapt the local scales of processing to the local image structure. Support for the proposed approach is given in terms of a general theoretical investigation of the behaviour of the scale selection method under rescalings of the input pattern and by integration with different types of early visual modules, including experiments on real-world and synthetic data. Support is also given by a detailed analysis of how different types of feature detectors perform when integrated with a scale selection mechanism and then applied to characteristic model patterns. Specifically, it is described in detail how the proposed methodology applies to the problems of blob detection, junction detection, edge detection, ridge detection and local frequency estimation. In many computer vision applications, the poor performance of the low-level vision modules constitutes a major bottleneck. It is argued that the inclusion of mechanisms for automatic scale selection is essential if we are to construct vision systems to automatically analyse complex unknown environments.
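The core mechanism can be sketched in a few lines: compute γ-normalized Laplacian-of-Gaussian responses over a range of scales and take extrema over scale (here, for simplicity, the global extremum over scale and space) as hypotheses about interesting scale levels. With γ = 1 the selected scale for an ideal bright disc of radius r lies near r/√2; the synthetic image and the scale range are illustrative choices.

```python
import numpy as np
from scipy.ndimage import gaussian_laplace

def normalized_log_stack(image, sigmas, gamma=1.0):
    """gamma-normalized Laplacian-of-Gaussian responses over a range of scales."""
    return np.stack([(sigma ** (2 * gamma)) * gaussian_laplace(image, sigma)
                     for sigma in sigmas])

image = np.zeros((128, 128))
yy, xx = np.mgrid[:128, :128]
image[(yy - 64) ** 2 + (xx - 64) ** 2 < 10 ** 2] = 1.0    # bright disc, radius 10

sigmas = np.linspace(2, 20, 40)
stack = np.abs(normalized_log_stack(image, sigmas))
k, y, x = np.unravel_index(stack.argmax(), stack.shape)   # extremum over scale and space
print(f"selected sigma = {sigmas[k]:.1f}, expected about {10 / np.sqrt(2):.1f}")
```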

19.
A $C^0$ -weak Galerkin (WG) method is introduced and analyzed in this article for solving the biharmonic equation in 2D and 3D. A discrete weak Laplacian is defined for $C^0$ functions, which is then used to design the weak Galerkin finite element scheme. This WG finite element formulation is symmetric, positive definite and parameter free. Optimal order error estimates are established for the weak Galerkin finite element solution in both a discrete $H^2$ norm and the standard $H^1$ and $L^2$ norms with appropriate regularity assumptions. Numerical results are presented to confirm the theory. As a technical tool, a refined Scott-Zhang interpolation operator is constructed to assist the corresponding error estimates. This refined interpolation preserves the volume mass of order $(k+1-d)$ and the surface mass of order $(k+2-d)$ for the $P_{k+2}$ finite element functions in $d$ -dimensional space.  相似文献   

20.
This paper presents an iris recognition system using an automatic scale selection algorithm for iris feature extraction. The proposed system first filters the given iris image with a bank of Laplacian of Gaussian (LoG) filters at many different scales and computes the normalized response of every filter. The parameter γ, used to normalize the filter responses, is derived by analyzing the scale-space maxima of the blob feature detector responses. Then the maximum normalized responses over scales at each point are selected together as the optimal filter outputs of the given iris image, and the binary codes for iris feature representation are obtained by encoding these optimal outputs through a zero threshold. Comparison experiment results clearly demonstrate the efficient performance of the proposed algorithm.
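A compact sketch of the pipeline described above is given below, interpreting the selected output at each point as the maximum-magnitude normalized LoG response; the derivation of γ from scale-space maxima, the iris segmentation and unwrapping, and the matching stage are not reproduced, and the scale set is an illustrative choice.

```python
import numpy as np
from scipy.ndimage import gaussian_laplace

def iris_code(unwrapped_iris, sigmas, gamma=1.0):
    """Binary feature code from the scale-wise maximum of normalized LoG
    responses, thresholded at zero (a rough sketch of the described scheme)."""
    responses = np.stack([(s ** (2 * gamma)) * gaussian_laplace(unwrapped_iris, s)
                          for s in sigmas])
    idx = np.abs(responses).argmax(axis=0)            # best scale at each pixel
    optimal = np.take_along_axis(responses, idx[None], axis=0)[0]
    return (optimal > 0).astype(np.uint8)             # zero-threshold binary code

iris_a = np.random.rand(64, 512)                      # stand-ins for normalized iris strips
iris_b = np.random.rand(64, 512)
code_a = iris_code(iris_a, sigmas=[1.0, 2.0, 4.0, 8.0])
code_b = iris_code(iris_b, sigmas=[1.0, 2.0, 4.0, 8.0])
hamming = np.count_nonzero(code_a ^ code_b) / code_a.size   # matching score (lower = closer)
```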
