Similar Articles
20 similar articles found (search time: 31 ms)
1.
Unifying statistical texture classification frameworks
The objective of this paper is to examine statistical approaches to the classification of textured materials from a single image obtained under unknown viewpoint and illumination. The approaches investigated here are based on the joint probability distribution of filter responses.

We review previous work based on this formulation and make two observations. First, we show that there is a correspondence between the two common representations of filter outputs—textons and binned histograms. Second, we show that two classification methodologies, nearest neighbour matching and Bayesian classification, are equivalent for particular choices of the distance measure. We describe the pros and cons of these alternative representations and distance measures, and illustrate the discussion by classifying all the materials in the Columbia-Utrecht (CUReT) texture database.

These equivalences allow us to perform direct comparisons between the texton frequency matching framework, best exemplified by the classifiers of Leung and Malik [Int. J. Comput. Vis. 43 (2001) 29], Cula and Dana [Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2001) 1041], and Varma and Zisserman [Proceedings of the Seventh European Conference on Computer Vision 3 (2002) 255], and the Bayesian framework most closely represented by the work of Konishi and Yuille [Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2000) 125].
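The nearest-neighbour side of this equivalence can be made concrete with a toy sketch: matching binned filter-response histograms under the chi-square distance, one of the distance measures discussed in this framework. The 3-bin histograms and class names below are hypothetical, and this is not the authors' full texton pipeline.

```python
def chi_square(h1, h2, eps=1e-10):
    """Symmetric chi-square distance between two (normalized) histograms."""
    return 0.5 * sum((a - b) ** 2 / (a + b + eps) for a, b in zip(h1, h2))

def classify_nearest_neighbour(query, models):
    """Return the class whose model histogram is closest to the query."""
    return min(models, key=lambda cls: chi_square(query, models[cls]))

# Hypothetical 3-bin model histograms for two texture classes.
models = {"brick": [0.7, 0.2, 0.1], "fabric": [0.1, 0.3, 0.6]}
label = classify_nearest_neighbour([0.6, 0.3, 0.1], models)
```

Swapping `chi_square` for another distance (or replacing the minimum with a likelihood) is exactly the kind of choice the paper shows to be equivalent to a Bayesian classifier.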


2.
The aim of Content-based Image Retrieval (CBIR) is to find the set of images that best matches a query based on visual features. Most existing CBIR systems find similar images using low-level features, while Text-based Image Retrieval (TBIR) systems find images with relevant tags regardless of image content. Generally, people are more interested in images that are similar in both contour and high-level concept. We therefore propose a new strategy, called Iterative Search, to meet this requirement: it mines knowledge from the images similar to the original query in order to compensate for information lost in the feature extraction process. To evaluate the performance of the Iterative Search approach, we apply it to four different CBIR systems (HOF Zhou et al. in ACM international conference on multimedia, 2012; Zhou and Zhang in Neural information processing—international conference, ICONIP 2011, Shanghai, 2011; HOG Dalal and Triggs in IEEE computer society conference on computer vision pattern recognition, 2005; GIST Oliva and Torralba in Int J Comput Vision 42:145–175, 2001; and CNN Krizhevsky et al. in Adv Neural Inf Process Syst 25:2012, 2012) in our experiments. The results show that Iterative Search improves the performance of the original CBIR features by about 20% on both the Oxford Buildings dataset and the Object Sketches dataset. Moreover, it is not restricted to any particular visual features.
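A minimal sketch of the kind of loop such a strategy describes: retrieve the nearest images, mine their (averaged) features, and re-query. The Euclidean feature space, the mean-of-top-k expansion step, and the toy database below are illustrative assumptions, not the authors' exact method.

```python
def euclidean(u, v):
    """Euclidean distance between two feature vectors."""
    return sum((a - b) ** 2 for a, b in zip(u, v)) ** 0.5

def retrieve(query, database, k):
    """Rank database images (id -> feature vector) by distance to the query."""
    return sorted(database, key=lambda img_id: euclidean(query, database[img_id]))[:k]

def iterative_search(query, database, k, rounds=2):
    """Hypothetical iterative loop: re-query with the mean of the top-k features."""
    q = query
    for _ in range(rounds):
        top = [database[img_id] for img_id in retrieve(q, database, k)]
        q = [sum(coord) / len(top) for coord in zip(*top)]
    return retrieve(q, database, k)

# Toy 2-D "features": two tight clusters of images.
db = {"a1": [0.0, 0.0], "a2": [0.1, 0.0], "b1": [5.0, 5.0], "b2": [5.0, 5.1]}
hits = iterative_search([0.2, 0.1], db, k=2)
```

The loop is feature-agnostic, which mirrors the paper's point that the strategy is not tied to HOF, HOG, GIST, or CNN features in particular.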

3.
Capturing the scene gist accounts for rapid and accurate scene classification in the human visual system. This paper presents a biologically inspired, task-oriented gist model (BT-Gist) that attempts to emulate two important attributes of biological gist: a holistic, scene-centered spatial layout representation and task-oriented resolution determination. For the first attribute, we enrich the model of Oliva and Torralba by refining the low-level features in several biologically plausible ways, extending the spatial layout to multiple resolutions, and applying perceptually meaningful manifold analysis to obtain a set of multi-resolution biologically inspired intrinsic manifold spatial layouts (BMSLs). Since the resolution that best represents the spatial layout varies from task to task, we embody the second attribute as learning the combination of multi-resolution BMSLs with respect to their optimal discrimination-invariance trade-off for the task at hand, and cast this in an SVM-based localized multiple kernel learning (LMKL) framework, in which the kernel of each scene gist is approximated as a local combination of kernels associated with multi-resolution BMSLs. By exploring the task-specific category distribution pattern over BMSLs, we define the local model as a category distribution sensitive (CDS) kernel, which can accommodate both the diverse individuality of a specific BMSL and the universality shared across the whole category space. Via CDS-LMKL, both the optimal resolution for spatial layouts and the final classifier can be obtained efficiently in a joint manner. We evaluate BT-Gist on four natural scene databases and one cluttered indoor scene database against a range of alternatives: different MKL methods, various biologically inspired models, and BoF-based computer vision models. CDS-LMKL leads to better results than several existing MKL algorithms.
Despite its holistic nature, BT-Gist outperforms existing biologically inspired models and BoF-based computer vision models in natural scene classification, and competes with the object-segmentation-based ROI-Gist in cluttered indoor scene classification.
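As a crude stand-in for the multi-resolution spatial layouts such models build on, the sketch below average-pools a grayscale image onto grids of increasing resolution and concatenates the results. The grid sizes and the plain intensity features are illustrative assumptions; BT-Gist uses biologically inspired filter responses and manifold analysis on top of layouts of this kind.

```python
def block_average(img, g):
    """Average-pool an H x W grayscale image (list of lists) onto a g x g grid."""
    h, w = len(img), len(img[0])
    feats = []
    for bi in range(g):
        for bj in range(g):
            vals = [img[r][c]
                    for r in range(bi * h // g, (bi + 1) * h // g)
                    for c in range(bj * w // g, (bj + 1) * w // g)]
            feats.append(sum(vals) / len(vals))
    return feats

def multiresolution_layout(img, grids=(1, 2, 4)):
    """Concatenate spatial-layout features at several resolutions."""
    layout = []
    for g in grids:
        layout.extend(block_average(img, g))
    return layout

# An 8 x 8 image whose left half is dark (0) and right half is bright (1).
img = [[0] * 4 + [1] * 4 for _ in range(8)]
feats = multiresolution_layout(img)   # 1 + 4 + 16 = 21 features
```

Choosing which resolutions in `grids` to weight for a given task is, roughly, the decision the paper formulates as localized multiple kernel learning.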

4.
5.
In this paper, a new serial algorithm for solving nearly penta-diagonal linear systems is presented. Its computational cost is less than or comparable to those of recent successful algorithms [J. Jia, Q. Kong, and T. Sogabe, A fast numerical algorithm for solving nearly penta-diagonal linear systems, Int. J. Comput. Math. 89 (2012), pp. 851–860; X.G. Lv and J. Le, A note on solving nearly penta-diagonal linear systems, Appl. Math. Comput. 204 (2008), pp. 707–712; S.N. Neossi Nguetchue and S. Abelman, A computational algorithm for solving nearly penta-diagonal linear systems, Appl. Math. Comput. 203 (2008), pp. 629–634]. Moreover, the algorithm lends itself to parallelization; one parallel variant is given and shown to be promising. Implementing the algorithms in computer algebra systems such as MATLAB and MAPLE is straightforward. Two numerical examples illustrate the validity and efficiency of our algorithms.
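The structure such algorithms exploit can be seen in a small sketch: for a strictly penta-diagonal matrix, Gaussian elimination only ever touches the two rows below and the two columns to the right of each pivot, so the solve costs O(n) rather than O(n³). This is a plain-Python illustration without pivoting (and without the extra corner entries that make a system "nearly" penta-diagonal), not the algorithm of the paper.

```python
def solve_pentadiagonal(A, b):
    """Gaussian elimination without pivoting for a strictly penta-diagonal matrix.
    Each pivot affects at most the two rows below it and the two columns to its
    right, so the total work is O(n) instead of O(n^3)."""
    n = len(A)
    A = [row[:] for row in A]          # work on copies
    b = b[:]
    for i in range(n - 1):
        for j in range(i + 1, min(i + 3, n)):      # at most two rows below the pivot
            m = A[j][i] / A[i][i]
            for k in range(i, min(i + 3, n)):      # pivot row is zero beyond column i+2
                A[j][k] -= m * A[i][k]
            b[j] -= m * b[i]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):                 # back substitution within the band
        s = sum(A[i][k] * x[k] for k in range(i + 1, min(i + 3, n)))
        x[i] = (b[i] - s) / A[i][i]
    return x

# A small diagonally dominant penta-diagonal system with known solution x = [1, 1, 1, 1].
A = [[4.0, 1.0, 1.0, 0.0],
     [1.0, 4.0, 1.0, 1.0],
     [1.0, 1.0, 4.0, 1.0],
     [0.0, 1.0, 1.0, 4.0]]
b = [sum(row) for row in A]            # right-hand side for the all-ones solution
x = solve_pentadiagonal(A, b)
```

Skipping pivoting is safe here because the example is diagonally dominant; the published algorithms handle the corner entries and numerical robustness that this sketch omits.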

6.
In this paper we present a hierarchical and contextual model for aerial image understanding. Our model organizes objects (cars, roofs, roads, trees, parking lots) in aerial scenes into hierarchical groups whose appearances and configurations are determined by statistical constraints (e.g. relative position, relative scale, etc.). Our hierarchy is a non-recursive grammar for objects in aerial images comprised of layers of nodes that can each decompose into a number of different configurations. This allows us to generate and recognize a vast number of scenes with relatively few rules. We present a minimax entropy framework for learning the statistical constraints between objects and show that this learned context allows us to rule out unlikely scene configurations and hallucinate undetected objects during inference. A similar algorithm was proposed for texture synthesis (Zhu et al. in Int. J. Comput. Vis. 2:107–126, 1998) but did not incorporate hierarchical information. We use a range of different bottom-up detectors (AdaBoost, TextonBoost, Compositional Boosting (Freund and Schapire in J. Comput. Syst. Sci. 55, 1997; Shotton et al. in Proceedings of the European Conference on Computer Vision, pp. 1–15, 2006; Wu et al. in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1–8, 2007)) to propose locations of objects in new aerial images, and employ a cluster sampling algorithm (C4; Porway and Zhu, 2009) to choose the subset of detections that best explains the image according to our learned prior model. The C4 algorithm can quickly and efficiently switch between competing sub-solutions, for example whether an image patch is better explained by a parking lot with cars or by a building with vents. We also show that our model can predict the locations of objects our detectors missed. We conclude by presenting parsed aerial images and experimental results showing that our cluster sampling and top-down prediction algorithms use the learned contextual cues from our model to improve detection results over traditional bottom-up detectors alone.

7.
In reverse engineering CAD modeling, a facet model is usually constructed from a large point cloud obtained by scanning a surface. The number of points may range from hundreds of thousands to several million, depending on the user-defined precision. As a result, the facet model becomes very 'large' in terms of the number of facets or vertices, and the computational effort required to manipulate such a large data set becomes enormous; it is significant even for simple operations such as rotation, scaling and translation. In this paper, an algorithm is proposed to determine the extreme points of a large 3D point set along multiple directions. The algorithm uses a cylindrical grid approximation technique and provides both an approximate solution and an exact solution. It can be used to accelerate the computation of several geometric problems on large models, e.g. the minimum bounding box of a facet model [Comput Aid Des 20 (1988) 506; Comput Struct 79I (2001) 1433; Int J Comput Inform Sci 14 (1985) 183] and the 'fitness' problem of a model into a bounded volume [Comput Aid Des 20 (1988) 506].
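The exact problem being accelerated has a simple brute-force form: for each query direction, the extreme point is the one maximizing the dot product with that direction, at O(n) per direction. The cylindrical-grid method of the paper precomputes a structure so that many directions can be answered far more cheaply; the sketch below is only the naive baseline, with made-up points.

```python
def extreme_point(points, direction):
    """Return the point with the largest projection onto the given direction."""
    return max(points, key=lambda p: sum(pi * di for pi, di in zip(p, direction)))

def extreme_points(points, directions):
    """Naive baseline: one O(n) scan of all points per query direction."""
    return [extreme_point(points, d) for d in directions]

# Toy 3-D point set and axis-aligned query directions.
pts = [(0, 0, 0), (1, 0, 0), (0, 2, 0), (0, 0, 3), (-1, -1, -1)]
ext = extreme_points(pts, [(1, 0, 0), (0, 1, 0), (0, 0, 1)])
```

Running this over the six signed axis directions already yields the axis-aligned bounding box, which hints at why fast extreme-point queries accelerate minimum-bounding-box computations.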

8.
International Journal of Computer Mathematics, 2012, 89(12): 2186–2200
We present an algorithm to compute the subresultant sequence of two polynomials that completely avoids division in the ground domain, generalizing an algorithm given by Abdeljaoued et al. [J. Abdeljaoued, G. Diaz-Toca, and L. Gonzalez-Vega, Minors of Bezout matrices, subresultants and the parameterization of the degree of the polynomial greatest common divisor, Int. J. Comput. Math. 81 (2004), pp. 1223–1238]. We evaluate determinants of slightly manipulated Bezout matrices using the algorithm of Berkowitz. Although the algorithm gives worse complexity bounds than pseudo-division approaches, our experiments show that our approach is superior for input polynomials of moderate degree when the ground domain contains indeterminates.

9.
10.
We describe an O(n³/log n)-time algorithm for the all-pairs shortest paths problem on a real-weighted directed graph with n vertices. This slightly improves on a series of earlier subcubic algorithms by Fredman (SIAM J. Comput. 5:49–60, 1976), Takaoka (Inform. Process. Lett. 43:195–199, 1992), Dobosiewicz (Int. J. Comput. Math. 32:49–60, 1990), Han (Inform. Process. Lett. 91:245–250, 2004), Takaoka (Proc. 10th Int. Conf. Comput. Comb., Lect. Notes Comput. Sci., vol. 3106, pp. 278–289, Springer, 2004), and Zwick (Proc. 15th Int. Sympos. Algorithms and Computation, Lect. Notes Comput. Sci., vol. 3341, pp. 921–932, Springer, 2004). The new algorithm is surprisingly simple and different from previous ones. A preliminary version of this paper appeared in Proc. 9th Workshop Algorithms Data Struct. (WADS), Lect. Notes Comput. Sci., vol. 3608, pp. 318–324, Springer, 2005.
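For context, the classical O(n³) baseline that these algorithms shave logarithmic factors off is Floyd–Warshall dynamic programming. The sketch below is that textbook baseline on an adjacency matrix, not the O(n³/log n) algorithm of this paper.

```python
from math import inf

def floyd_warshall(dist):
    """All-pairs shortest paths on an n x n weight matrix (inf = no edge).
    Modifies and returns the matrix; assumes no negative cycles."""
    n = len(dist)
    for k in range(n):                      # allow vertex k as an intermediate
        for i in range(n):
            for j in range(n):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    return dist

# Toy 3-vertex directed graph: 0 -> 1 (3), 1 -> 2 (1), 2 -> 0 (2).
d = floyd_warshall([[0, 3, inf],
                    [inf, 0, 1],
                    [2, inf, 0]])
```

The triple loop performs exactly n³ relaxations with real-valued weights, which is the comparison-based cost model in which the subcubic algorithms cited above achieve their improvements.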

11.
International Journal of Computer Mathematics, 2012, 89(14): 3196–3198
In [Y.l. Wang, T. Chaolu, Z. Chen, Using reproducing kernel for solving a class of singular weakly nonlinear boundary value problems, Int. J. Comput. Math. 87(2) (2010), pp. 367–380], we presented three algorithms for solving a class of boundary value problems for ordinary differential equations in a reproducing kernel space. It is worth noting that our methods can also be used to solve partial integro-differential equations. In this note, we use method 2 to solve a class of partial integro-differential equations [M. Dehghan, Solution of a partial integro-differential equation arising from viscoelasticity, Int. J. Comput. Math. 83(1) (2006), pp. 123–129] in a reproducing kernel space. A numerical example shows that our method is effective and highly accurate.

12.
We propose a simple and robust numerical algorithm to deal with multi-phase motion of gas, liquid and solid based on the level set method [S. Osher, J.A. Sethian, Fronts propagating with curvature-dependent speed: Algorithms based on Hamilton–Jacobi formulations, J. Comput. Phys. 79 (1988) 12; M. Sussman, P. Smereka, S. Osher, A level set approach for computing solutions to incompressible two-phase flow, J. Comput. Phys. 114 (1994) 146; J.A. Sethian, Level Set Methods and Fast Marching Methods, Cambridge University Press, 1999; S. Osher, R. Fedkiw, Level Set Methods and Dynamic Implicit Surfaces, Applied Mathematical Sciences, vol. 153, Springer, 2003]. In the Eulerian framework, simulating the interaction between a moving solid object and an interfacial flow requires at least two level set functions to distinguish the three materials. In such simulations, the two functions generally overlap and/or disagree due to numerical errors such as numerical diffusion. In this paper, we resolve this problem using the idea of the active contour model [M. Kass, A. Witkin, D. Terzopoulos, Snakes: Active contour models, International Journal of Computer Vision 1 (1988) 321; V. Caselles, R. Kimmel, G. Sapiro, Geodesic active contours, International Journal of Computer Vision 22 (1997) 61; G. Sapiro, Geometric Partial Differential Equations and Image Analysis, Cambridge University Press, 2001; R. Kimmel, Numerical Geometry of Images: Theory, Algorithms, and Applications, Springer-Verlag, 2003] introduced in the field of image processing.

13.
In this paper, a new graph representation is proposed which is applicable to cable–membrane structures modelled using both one- and two-dimensional elements. The proposed representation follows an engineering design approach rather than a mathematically derived one. The proposed graphs are partitioned using state-of-the-art tools, including METIS [METIS, a software package for partitioning unstructured graphs, partitioning meshes, and computing fill-reducing orderings of sparse matrices (1997); J Parallel Distribut Comput (1997)] and JOSTLE [Advances in computational mechanics with parallel and distributed processing (1997); Parallel dynamic graph-partitioning for unstructured meshes (1997); Int J High Perform Comput Appl 13 (1999) 334; Appl Math Model 25 (2000) 123]. The proposed representation performs better than standard graph representations in cases where the rules of geometric locality and uniform element distribution around nodes are violated. Its relation to the most advanced hyper-graph representation [IEEE Trans Parallel Distribut Syst 10 (1999) 673; Parallel Comput 26 (2000) 673] is also discussed.

14.
Computers & Fluids, 2005, 34(4–5): 593–615
The class of multidimensional upwind residual distribution (RD) schemes has been developed over the past decades as an attractive alternative to the finite volume (FV) and finite element (FE) approaches. Although these schemes have shown superior performance in the simulation of steady two- and three-dimensional inviscid and viscous flows, their extension to unsteady flow fields is still a topic of intense research [ICCFD2, International Conference on Computational Fluid Dynamics 2, Sydney, Australia, 15–19 July 2002; M. Mezine, R. Abgrall, Upwind multidimensional residual schemes for steady and unsteady flows]. Recently, the space–time RD approach has been developed by several researchers [Int. J. Numer. Methods Fluids 40 (2002) 573; J. Comput. Phys. 188 (2003) 16; Á.G. Csík, Upwind residual distribution schemes for general hyperbolic conservation laws and application to ideal magnetohydrodynamics, PhD thesis, Katholieke Universiteit Leuven, 2002; R. Abgrall, M. Mezine, Construction of second order accurate monotone and stable residual distribution schemes for unsteady flow problems], which allows second order accurate unsteady inviscid computations. In this paper we follow the work of [Int. J. Numer. Methods Fluids 40 (2002) 573; Á.G. Csík, PhD thesis, Katholieke Universiteit Leuven, 2002], in which the space–time domain is discretized and solved as a (d+1)-dimensional problem, where d is the number of space dimensions. There it is shown that, thanks to the multidimensional upwinding of the RD method, the solution of the unsteady problem can be decoupled into sub-problems on space–time slabs composed of simplicial elements, yielding a true time-marching procedure. Moreover, the method is implicit and unconditionally stable for arbitrarily large time steps if positive RD schemes are employed.
We extend this space–time approach to laminar viscous flow computations. A Petrov–Galerkin treatment of the viscous terms [Project Report 2002-06, von Karman Institute for Fluid Dynamics, Belgium, 2002; J. Dobeš, Implicit space–time method for laminar viscous flow], consistent with the space–time formulation, has been investigated, implemented and tested. Second order accuracy in both space and time was observed on unstructured triangulations of the spatial domain. The solution is obtained at each time step by solving an implicit non-linear system of equations, which, again following [Int. J. Numer. Methods Fluids 40 (2002) 573; Á.G. Csík, PhD thesis, Katholieke Universiteit Leuven, 2002], we formulate as a steady-state problem in a pseudo-time variable. We discuss the efficiency of an explicit forward Euler pseudo-time integrator compared to the implicit Euler; when applied to viscous computations, the implicit method has shown speed-ups of more than a factor of 50 in computational time.

15.
16.
Image segmentation using a multilayer level-set approach
We propose an efficient multilayer segmentation method based on implicit curve evolution and a variational approach. The proposed formulation uses the minimal partition problem as formulated by D. Mumford and J. Shah, and can be seen as a more efficient extension of the segmentation models previously proposed in Chan and Vese (Scale-Space Theories in Computer Vision, Lecture Notes in Computer Science, Vol. 1682, pp. 141–151, 1999; IEEE Trans Image Process 10(2):266–277, 2001) and Vese and Chan (Int J Comput Vis 50(3):271–293, 2002). The set of unknown discontinuities is represented implicitly by several nested level lines of the same function, inspired by prior work on island dynamics for epitaxial growth (Caflisch et al. in Appl Math Lett 12(4):13, 1999; Chen et al. in J Comput Phys 167:475, 2001). We present the Euler–Lagrange equations of the proposed minimizations together with theoretical results on energy decrease, existence of minimizers, and approximations. We also discuss the choice of curve regularization and conclude with several experimental results and comparisons for piecewise-constant segmentation of gray-level and color images.

17.
We introduce a new ergodic algorithm for solving equilibrium problems over the fixed point set of a nonexpansive mapping. In contrast to the existing algorithm of Kim [The Bruck's ergodic iteration method for the Ky Fan inequality over the fixed point set. Int. J. Comput. Math. 94 (2017), pp. 2466–2480], our algorithm uses self-adaptive step sizes. As a result, it converges under milder conditions. Moreover, at each step, instead of solving strongly convex subproblems, we only have to compute a subgradient of a convex function, so our algorithm has a lower computational cost.
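To see why a subgradient step is cheap compared to solving a strongly convex subproblem per iteration, here is a generic projected subgradient method with an ergodic (averaged) output on a toy convex problem: minimize f(x) = |x - 3| over the interval [0, 2]. The problem, the diminishing 1/k step sizes, and the plain averaging rule are illustrative assumptions; the paper's algorithm addresses equilibrium problems over fixed point sets, which this sketch does not.

```python
def projected_subgradient(subgrad, project, x0, steps):
    """Generic projected subgradient method; returns the ergodic (mean) iterate."""
    x = x0
    iterates = []
    for k in range(1, steps + 1):
        x = project(x - subgrad(x) / k)   # diminishing step size 1/k
        iterates.append(x)
    return sum(iterates) / len(iterates)

# Toy problem: minimize |x - 3| over [0, 2]; the constrained minimizer is x = 2.
subgrad = lambda x: -1.0 if x < 3 else 1.0     # a subgradient of |x - 3|
project = lambda x: min(2.0, max(0.0, x))      # projection onto [0, 2]
x_bar = projected_subgradient(subgrad, project, x0=0.0, steps=200)
```

Each iteration costs one subgradient evaluation and one projection, whereas an inner strongly convex solve would itself require an iterative optimization.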

18.
This paper describes a new version of the generalized finite element method (GFEM), originally developed in [Int. J. Numer. Methods Engrg. 47 (2000) 1401; Comput. Methods Appl. Mech. Engrg. 181 (2000) 43; The design and implementation of the generalized finite element method, Ph.D. thesis, Texas A&M University, College Station, Texas, August 2000; Comput. Methods Appl. Mech. Engrg. 190 (2001) 4081], which is well suited to problems set in domains with a large number of internal features (e.g. voids, inclusions, cracks, etc.). The main idea is to employ handbook functions constructed on subdomains resulting from the mesh discretization of the problem domain. The new version of the GFEM is shown to be robust with respect to the spacing of the features and is capable of achieving high accuracy on meshes that are rather coarse relative to the distribution of the features.

19.
20.
In their paper "Tight bound on Johnson's algorithm for maximum satisfiability" [J. Comput. System Sci. 58 (3) (1999) 622-640], Chen, Friesen and Zheng provided a tight bound on the approximation ratio of Johnson's algorithm for Maximum Satisfiability [J. Comput. System Sci. 9 (3) (1974) 256-278]. We give a simplified proof of their result and investigate the extent to which it generalizes to non-Boolean domains.
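Johnson's algorithm itself is a short greedy procedure: weight each clause by 2^(-|c|), fix the variables one at a time in the direction that satisfies the larger total weight, and double the weight of a clause whenever one of its literals is falsified. The sketch below implements this for unweighted CNF (literal v / -v encodes x_v / not x_v); the toy formula is made up.

```python
def johnson_maxsat(num_vars, clauses):
    """Johnson's greedy approximation for (unweighted) MAX-SAT.
    clauses: list of sets of literals; literal v means x_v, -v means not x_v."""
    live = [(set(c), 2.0 ** -len(c)) for c in clauses]   # (clause, modified weight)
    assignment = {}
    for v in range(1, num_vars + 1):
        pos = sum(w for c, w in live if v in c)
        neg = sum(w for c, w in live if -v in c)
        assignment[v] = pos >= neg          # pick the side with larger total weight
        sat_lit, unsat_lit = (v, -v) if assignment[v] else (-v, v)
        updated = []
        for c, w in live:
            if sat_lit in c:
                continue                    # clause satisfied: drop it
            if unsat_lit in c:
                c, w = c - {unsat_lit}, 2.0 * w   # literal falsified: weight doubles
            updated.append((c, w))
        live = updated
    satisfied = sum(1 for c in clauses
                    if any((lit > 0) == assignment[abs(lit)] for lit in c))
    return assignment, satisfied

# Toy formula: (x1 or x2) and (not x1) and (not x2 or x3).
assignment, satisfied = johnson_maxsat(3, [{1, 2}, {-1}, {-2, 3}])
```

The 2/3 approximation ratio that Chen, Friesen and Zheng proved tight is a bound on `satisfied` relative to the optimum over all assignments.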


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司)  京ICP备09084417号