20 similar documents found.
1.
An edge segmentation method utilizing cooperative computation and multi-scale analysis is presented. The method is based on directional proximity operators and a two-scale cooperative algorithm. The processes of edge grouping, skeletonization, gap filling and thresholding cooperate by exchanging their input and output data. The segmentation process alternates between two channels that differ in a set of three scaling parameters. A coarse-to-fine strategy is proposed. The method is useful for the extraction of linear edge segments in three-dimensional robot vision systems.
2.
Computer Vision and Image Understanding, 2010, 114(7): 731-744
One approach to image segmentation defines a function of image partitions whose maxima correspond to perceptually salient segments. We extend previous approaches following this framework by requiring that our image model sharply decreases in its power to organize the image as a segment’s boundary is perturbed from its true position. Instead of making segment boundaries prefer image edges, we add a term to the objective function that seeks a sharp change in fitness with respect to the entire contour’s position, generalizing from edge detection’s search for sharp changes in local image brightness. We also introduce a prior on the shape of a salient contour that expresses the observed multi-scale distribution of contour curvature for physical contours. We show that our new term correlates strongly with salient structure. We apply our method to real images and verify that the new term improves performance. Comparisons with other state-of-the-art approaches validate our method’s advantages.
3.
Antonio Robles-Kelly, Edwin R. Hancock. Pattern Recognition, 2004, 37(7): 1387-1405
This paper presents an iterative spectral framework for pairwise clustering and perceptual grouping. Our model is expressed in terms of two sets of parameters. Firstly, there are cluster memberships which represent the affinity of objects to clusters. Secondly, there is a matrix of link weights for pairs of tokens. We adopt a model in which these two sets of variables are governed by a Bernoulli model. We show how the likelihood function resulting from this model may be maximised with respect to both the elements of the link-weight matrix and the cluster-membership variables. We establish the link between the maximisation of the log-likelihood function and the eigenvectors of the link-weight matrix. This leads us to an algorithm in which we iteratively update the link-weight matrix by repeatedly refining its modal structure. Each iteration of the algorithm is a three-step process. First, we compute a link-weight matrix for each cluster by taking the outer product of the vector of current cluster-membership indicators for that cluster. Second, we extract the leading eigenvector from each modal link-weight matrix. Third, we compute a revised link-weight matrix by taking the sum of the outer products of the leading eigenvectors of the modal link-weight matrices.
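For concreteness, here is a minimal Python sketch of one reading of the three-step loop above, assuming a precomputed symmetric affinity (link-weight) matrix W and soft membership columns S. Masking each cluster's outer product by the current affinities is our interpretation (a bare outer product of the membership vector would make the eigen-step reproduce the memberships unchanged); all names and parameters are illustrative, not the authors' code.

```python
import numpy as np

def spectral_grouping(W, S, n_iters=20):
    """W: (n, n) symmetric link-weight matrix; S: (n, k) soft memberships.
    Returns the refined link-weight matrix and memberships."""
    n, k = S.shape
    for _ in range(n_iters):
        W_new = np.zeros_like(W)
        for c in range(k):
            s = S[:, c]
            # Step 1: per-cluster modal link-weight matrix from the outer
            # product of the membership indicator (masked by W: our reading).
            Wc = np.outer(s, s) * W
            # Step 2: leading eigenvector of the modal matrix.
            _, vecs = np.linalg.eigh(Wc)
            phi = np.abs(vecs[:, -1])          # fix sign ambiguity
            # Step 3: accumulate the outer product of leading eigenvectors.
            W_new += np.outer(phi, phi)
            S[:, c] = phi / (phi.sum() + 1e-12)  # renormalised memberships
        W = W_new
    return W, S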
4.
Dynamic contour: A texture approach and contour operations
The large morphometric variability in biomedical organs requires an accurate fitting method for a pregenerated contour model. We propose a physically based approach to fitting 2D shapes using texture feature vectors and contour operations that allow even automatic contour splitting. To support shrinkage of the contour and obtain a better fit for concave parts, an area force is introduced. When two parts of the active contour approach each other, the contour divides. The elastically deforming contour is modeled as a set of masses linked by springs whose natural lengths are set to zero. We also propose a method for automatic estimation of some model parameters based on a histogram of image forces along a contour.
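A hedged sketch of the mass-spring dynamics described here: zero-rest-length springs reduce to a discrete Laplacian of the point positions, and the inward "area force" supports shrinkage into concavities. Contour splitting and the texture-based image force are omitted; all parameter names and values are illustrative.

```python
import numpy as np

def evolve_contour(x, k_spring=0.5, p_area=0.2, damping=0.8,
                   dt=0.1, n_steps=200, ext_force=None):
    """x: (n, 2) closed-contour point positions (masses).
    ext_force: optional callable returning image/texture forces at x."""
    v = np.zeros_like(x)
    for _ in range(n_steps):
        left, right = np.roll(x, 1, axis=0), np.roll(x, -1, axis=0)
        # Zero-rest-length springs pull each mass toward both neighbours,
        # i.e. a discrete Laplacian of the positions.
        f = k_spring * (left + right - 2.0 * x)
        # Area force along the normal (rotate the tangent by 90 degrees);
        # the sign giving "inward" depends on contour orientation.
        tangent = right - left
        normal = np.stack([tangent[:, 1], -tangent[:, 0]], axis=1)
        normal /= np.linalg.norm(normal, axis=1, keepdims=True) + 1e-12
        f -= p_area * normal
        if ext_force is not None:
            f += ext_force(x)
        v = damping * v + dt * f   # damped Newtonian update of the masses
        x = x + dt * v
    return x
```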
5.
A computational approach to edge detection
This paper describes a computational approach to edge detection. The success of the approach depends on the definition of a comprehensive set of goals for the computation of edge points. These goals must be precise enough to delimit the desired behavior of the detector while making minimal assumptions about the form of the solution. We define detection and localization criteria for a class of edges, and present mathematical forms for these criteria as functionals on the operator impulse response. A third criterion is then added to ensure that the detector has only one response to a single edge. We use the criteria in numerical optimization to derive detectors for several common image features, including step edges. On specializing the analysis to step edges, we find that there is a natural uncertainty principle between detection and localization performance, which are the two main goals. With this principle we derive a single operator shape which is optimal at any scale. The optimal detector has a simple approximate implementation in which edges are marked at maxima in gradient magnitude of a Gaussian-smoothed image. We extend this simple detector using operators of several widths to cope with different signal-to-noise ratios in the image. We present a general method, called feature synthesis, for the fine-to-coarse integration of information from operators at different scales. Finally, we show that step edge detector performance improves considerably as the operator point spread function is extended along the edge.
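The "simple approximate implementation" lends itself to a short sketch: Gaussian smoothing, gradient magnitude, and non-maximum suppression along the quantized gradient direction. Hysteresis thresholding and feature synthesis are omitted, and the sigma/threshold values are illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def gradient_maxima_edges(image, sigma=2.0, thresh=0.1):
    """Mark edges at maxima of the gradient magnitude of a
    Gaussian-smoothed image (minimal sketch, no hysteresis)."""
    g = gaussian_filter(image.astype(float), sigma)
    gy, gx = np.gradient(g)
    mag = np.hypot(gx, gy)
    angle = np.mod(np.arctan2(gy, gx), np.pi)   # orientation in [0, pi)
    # Quantize gradient direction into 4 bins: 0, 45, 90, 135 degrees.
    q = ((angle + np.pi / 8) // (np.pi / 4)).astype(int) % 4
    offsets = {0: (0, 1), 1: (1, 1), 2: (1, 0), 3: (1, -1)}
    edges = np.zeros_like(mag, dtype=bool)
    for i in range(1, mag.shape[0] - 1):
        for j in range(1, mag.shape[1] - 1):
            dy, dx = offsets[q[i, j]]
            # Keep the pixel only if it beats both neighbours along the
            # gradient direction (non-maximum suppression).
            if mag[i, j] > thresh and \
               mag[i, j] >= mag[i + dy, j + dx] and \
               mag[i, j] >= mag[i - dy, j - dx]:
                edges[i, j] = True
    return edges
```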
6.
7.
A geometric approach to edge detection
This paper describes edge detection as a composition of four steps: conditioning, feature extraction, blending, and scaling. We examine the role of geometry in determining good features for edge detection and in setting parameters for functions to blend the features. We find that: (1) statistical features such as the range and standard deviation of window intensities can be as effective as more traditional features such as estimates of digital gradients; (2) blending functions that are roughly concave near the origin of feature space can provide visually better edge images than traditional choices such as the city-block and Euclidean norms; (3) geometric considerations can be used to specify the parameters of generalized logistic functions and Takagi-Sugeno input-output systems that yield a rich variety of edge images; and (4) understanding the geometry of the feature extraction and blending functions is the key to using models based on computational learning algorithms, such as neural networks and fuzzy systems, for edge detection. Edge images derived from a digitized mammogram are given to illustrate various facets of our approach.
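A hedged sketch of the feature-extraction and blending steps: window range and standard deviation as the features, blended by a logistic function applied to the Euclidean norm of the feature vector. The logistic parameters alpha and beta are our placeholders, not values from the paper.

```python
import numpy as np
from scipy.ndimage import maximum_filter, minimum_filter, uniform_filter

def statistical_edge_image(image, size=3, alpha=10.0, beta=0.5):
    """Edge image from statistical window features blended by a logistic."""
    img = image.astype(float)
    # Feature 1: range of intensities in each window.
    rng = maximum_filter(img, size) - minimum_filter(img, size)
    # Feature 2: standard deviation of intensities in each window.
    mean = uniform_filter(img, size)
    std = np.sqrt(np.maximum(uniform_filter(img**2, size) - mean**2, 0.0))
    # Normalise both features to [0, 1] so the blend is scale-free.
    f1 = rng / (rng.max() + 1e-12)
    f2 = std / (std.max() + 1e-12)
    # Blending function, roughly concave near the origin of feature space:
    # a logistic applied to the Euclidean norm of the feature vector.
    r = np.hypot(f1, f2)
    return 1.0 / (1.0 + np.exp(-alpha * (r - beta)))
```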
8.
A new approach to image segmentation is presented using a variational framework. Treating the edge points as interpolation points, the method carries out segmentation by minimizing an energy functional that interpolates a smooth threshold surface. In order to preserve the edge information of the original image in the threshold surface without unduly sharpening the edges, a non-convex energy functional is adopted. By introducing a binary energy, a relaxation algorithm with the property of global convergence is proposed for solving the optimization problem. As a result, the non-convex optimization problem is transformed into a series of convex optimization problems, and the problem of slow convergence or non-convergence is resolved. The method is tested experimentally, and the choice of the optimization parameters is also explored.
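As a rough illustration only, the sketch below substitutes a simple convex membrane (Laplace) interpolation, relaxed by Jacobi iteration, for the paper's non-convex functional: the threshold surface is pinned to the image values at edge points and smoothed elsewhere, then the image is thresholded against it.

```python
import numpy as np

def threshold_surface_segment(image, edge_mask, n_iters=500):
    """image: 2-D array; edge_mask: boolean array marking edge points.
    Convex stand-in for the variational idea (periodic boundaries)."""
    img = image.astype(float)
    t = np.full_like(img, img.mean())
    t[edge_mask] = img[edge_mask]          # pin to the interpolating points
    for _ in range(n_iters):
        # Jacobi relaxation of the membrane (Laplace) energy.
        avg = 0.25 * (np.roll(t, 1, 0) + np.roll(t, -1, 0) +
                      np.roll(t, 1, 1) + np.roll(t, -1, 1))
        t = np.where(edge_mask, img, avg)
    return img > t                          # binary segmentation
```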
9.
Peter N. Belhumeur. International Journal of Computer Vision, 1996, 19(3): 237-260
We develop a computational model for binocular stereopsis, attempting to explain the process by which the information detailing the 3-D geometry of object surfaces is encoded in a pair of stereo images. We design our model within a Bayesian framework, making explicit all of our assumptions about the nature of image coding and the structure of the world. We start by deriving our model for image formation, introducing a definition of half-occluded regions and deriving simple equations relating these regions to the disparity function. We show that the disparity function alone contains enough information to determine the half-occluded regions. We use these relations to derive a model for image formation in which the half-occluded regions are explicitly represented and computed. Next, we present our prior model in a series of three stages, or worlds, where each world considers an additional complication to the prior. We eventually argue that the prior model must be constructed from all of the local quantities in the scene geometry, i.e., depth, surface orientation, object boundaries, and surface creases. In addition, we present a new dynamic programming strategy for estimating these quantities. Throughout the article, we provide motivation for the development of our model by psychophysical examinations of the human visual system.
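The occlusion-aware matching can be illustrated with the classic scanline dynamic program, in which half-occluded pixels appear explicitly as skip moves. This is a toy stand-in, not the article's full Bayesian prior over depth, orientation, boundaries, and creases; occ_cost is an illustrative parameter.

```python
import numpy as np

def scanline_stereo(left, right, occ_cost=1.0):
    """left, right: 1-D intensity scanlines. Returns the DP cost table;
    a backtrace through it would recover matches and half-occlusions."""
    n, m = len(left), len(right)
    C = np.full((n + 1, m + 1), np.inf)
    C[0, :] = occ_cost * np.arange(m + 1)   # leading right-occlusions
    C[:, 0] = occ_cost * np.arange(n + 1)   # leading left-occlusions
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            match = (left[i - 1] - right[j - 1]) ** 2
            C[i, j] = min(C[i - 1, j - 1] + match,   # pixels correspond
                          C[i - 1, j] + occ_cost,    # left pixel occluded
                          C[i, j - 1] + occ_cost)    # right pixel occluded
    return C
```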
10.
We introduce a new approach using the Bayesian framework for the reconstruction of sparse Synthetic Aperture Radar (SAR) images. The algorithm, named SLIM, can be thought of as a sparse signal recovery algorithm with excellent sidelobe suppression and high-resolution properties. For a given sparsity-promoting prior, SLIM cyclically minimizes a regularized least-squares cost function. We show how SLIM can be used for SAR image reconstruction as well as SAR image enhancement. We evaluate the performance of SLIM by using realistically simulated complex-valued backscattered data from a backhoe vehicle. The numerical results show that SLIM can satisfactorily suppress the sidelobes and yield higher resolution than the conventional matched filter or delay-and-sum (DAS) approach. SLIM outperforms the widely used compressive sampling matching pursuit (CoSaMP) algorithm, which requires the delicate choice of user parameters. Compared with the recently developed iterative adaptive approach (IAA), which iteratively solves a weighted least squares problem, SLIM is much faster. Due to the computational complexity involved in SAR imaging, we show how SLIM can be made even more computationally efficient by utilizing the fast Fourier transform (FFT) and conjugate gradient (CG) method to carry out its computations. Furthermore, since SLIM is derived under the Bayesian model, the a posteriori distribution given by the algorithm provides us with a confidence measure for the statistical properties of the SAR image pixels.
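One commonly cited form of the SLIM iteration, under an l_q sparsity prior, alternates a power-dependent reweighting with a regularized least-squares solve. The sketch below states that form from memory and may differ in detail from the authors' exact algorithm; the FFT/CG accelerations mentioned in the abstract are omitted.

```python
import numpy as np

def slim(A, y, q=1.0, n_iters=30, eps=1e-8):
    """A: (m, n) complex measurement/steering matrix, y: (m,) data.
    Returns a sparse estimate of the (n,) reflectivity vector."""
    m, n = A.shape
    x = A.conj().T @ y / m             # matched-filter (DAS) initialisation
    eta = 1.0                          # noise-power estimate
    for _ in range(n_iters):
        p = np.abs(x) ** (2.0 - q) + eps      # prior-dependent weights
        AP = A * p                             # equals A @ diag(p)
        # Cyclic minimiser of the regularised least-squares cost:
        x = p * (A.conj().T @ np.linalg.solve(
            AP @ A.conj().T + eta * np.eye(m), y))
        eta = np.linalg.norm(y - A @ x) ** 2 / m   # update noise power
    return x
```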
11.
12.
Goh Wee Leng, D. P. Mital, Tay Sze Yong, Tan Kok Kang. Engineering Applications of Artificial Intelligence, 1994, 7(6): 639-651
To efficiently store the information found in paper documents, text and non-text regions need to be separated. Non-text regions include half-tone photographs and line diagrams. The text regions can be converted (via an optical character reader) to a computer-searchable form, and the non-text regions can be extracted and preserved in compressed form using image-compression algorithms. In this paper, an effective system for automatically segmenting a document image into regions of text and non-text is proposed. The system first performs an adaptive thresholding to obtain a binarized image. Subsequently the binarized image is smeared using a run-length differential algorithm. The smeared image is then subjected to a text characteristic filter to remove error smearing of non-text regions. Next, baseline cumulative blocking is used to rectangularize the smeared region. Finally, a text block growing algorithm is used to block out a text sentence. The recognition of text is carried out on a text sentence basis.
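To make the binarization and smearing steps concrete, here is a sketch using the classic run-length smoothing idea together with a mean-based local threshold. The paper's "run-length differential algorithm", text-characteristic filter, and block growing are not reproduced, and max_gap/size/offset are illustrative stand-ins.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def adaptive_binarize(image, size=31, offset=5):
    """Mean-based local threshold (stand-in for the paper's
    unspecified adaptive-thresholding step). Dark ink -> 1."""
    local_mean = uniform_filter(image.astype(float), size)
    return (image < local_mean - offset).astype(np.uint8)

def rlsa_horizontal(binary, max_gap=20):
    """Classic horizontal run-length smearing: bridge background runs
    shorter than max_gap so characters merge into word/line blobs."""
    out = binary.copy()
    for row in out:                     # rows are views; edits stick
        ink = np.flatnonzero(row)
        if ink.size < 2:
            continue
        gaps = np.diff(ink)
        for k in np.flatnonzero((gaps > 1) & (gaps <= max_gap)):
            row[ink[k] + 1:ink[k + 1]] = 1   # fill the short gap
    return out
```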
13.
Sparacino G, Milani S, Arslan E, Cobelli C. Computer Methods and Programs in Biomedicine, 2002, 68(3): 233-248
Several approaches, based on different assumptions and with various degrees of theoretical sophistication and implementation complexity, have been developed for improving the measurement of evoked potentials (EP) performed by conventional averaging (CA). In many of these methods, one of the major challenges is the exploitation of a priori knowledge. In this paper, we present a new method in which the second-order statistical information on the background EEG and on the unknown EP, necessary for the optimal filtering of each sweep in a Bayesian estimation framework, is, respectively, estimated from pre-stimulus data and obtained through a multiple integration of a white-noise process model. The latter model is flexible (i.e. it can be employed for a large class of EP) and simple enough to be easily identifiable from the post-stimulus data thanks to a smoothing criterion. The mean EP is determined as the weighted average of the filtered sweeps, where each weight is inversely proportional to the expected value of the norm of the corresponding filter error, a quantity determinable thanks to the Bayesian approach. The performance of the new approach is shown on both simulated and real auditory EP. A signal-to-noise ratio enhancement is obtained that can allow the (possibly automatic) identification of peak latencies and amplitudes with fewer sweeps than those required by CA. For cochlear EP, the method also allows the audiology investigator to gather new and clinically important information. The possibility of handling single-sweep analysis with further development of the method is also addressed.
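A simplified sketch of the weighting scheme: each sweep is shrunk by a scalar Wiener gain built from its pre-stimulus noise power, and the sweeps are combined with weights inversely proportional to the expected filtering-error norm. The paper's integrated-white-noise prior and smoothing-criterion identification are replaced here by this scalar approximation.

```python
import numpy as np

def weighted_ep_average(post, pre):
    """post, pre: (n_sweeps, n_samples) post- and pre-stimulus segments.
    Returns the weighted average of per-sweep filtered EP estimates."""
    noise_var = pre.var(axis=1)                       # per-sweep EEG power
    signal_var = max(post.mean(axis=0).var(), 1e-12)  # crude EP prior power
    gain = signal_var / (signal_var + noise_var)      # scalar Wiener gain
    filtered = gain[:, None] * post                   # filtered sweeps
    # Expected squared error norm of each filtered sweep; weights are
    # inversely proportional to it.
    err = post.shape[1] * gain * noise_var + 1e-12
    w = (1.0 / err) / (1.0 / err).sum()
    return (w[:, None] * filtered).sum(axis=0)
```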
14.
Kehtarnavaz N., deFigueiredo R.J.P. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1988, 10(5): 707-713
The Darboux vector contains both the curvature and torsion of a three-dimensional (3-D) curve as its components. Curvature measures how sharply a curve is turning, while torsion measures the extent of its twist in 3-D space. Curvature and torsion completely define the shape of a 3-D curve. A scheme is presented that uses the length of this vector, also called the total curvature, for the segmentation of 3-D contours. A quintic B-spline is used in this formulation to obtain the total curvature for noisy data. Examples on noise-free and noisy data illustrate the merit of the scheme.
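The total-curvature computation maps directly onto standard spline tools. The sketch below fits a quintic B-spline with SciPy and evaluates sqrt(kappa^2 + tau^2) per sample; the subsequent contour segmentation (e.g. splitting at peaks of the total curvature) is left out, and the smoothing factor is illustrative.

```python
import numpy as np
from scipy.interpolate import splprep, splev

def total_curvature(points, smooth=1.0):
    """points: (n, 3) noisy 3-D contour samples, n > 5 (quintic spline).
    Returns |Darboux vector| = sqrt(curvature^2 + torsion^2) per sample."""
    tck, u = splprep(points.T, k=5, s=smooth)   # quintic B-spline fit
    d1 = np.array(splev(u, tck, der=1)).T       # r'(u)
    d2 = np.array(splev(u, tck, der=2)).T       # r''(u)
    d3 = np.array(splev(u, tck, der=3)).T       # r'''(u)
    cross = np.cross(d1, d2)
    speed = np.linalg.norm(d1, axis=1)
    kappa = np.linalg.norm(cross, axis=1) / speed**3            # curvature
    tau = np.einsum('ij,ij->i', cross, d3) / \
          (np.linalg.norm(cross, axis=1)**2 + 1e-12)            # torsion
    return np.hypot(kappa, tau)
```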
15.
In this paper, we attempt to place segmentation schemes utilizing the pyramid architecture on a firm footing. We show that there are some images which cannot be segmented in principle. An efficient segmentation scheme is also developed using pyramid relinking. This scheme normally has a time complexity that is a sublinear function of the image diameter, which compares favorably with other schemes. The efficacy of our approach to segmentation using pyramid schemes is demonstrated in the context of region matching. The global features we use are compared to those used in previous approaches, and this comparison indicates that our approach is more robust than standard moment-based techniques.
16.
This paper describes a knowledge-based approach to the problem of locating and segmenting the iris in images showing close-up human eyes. This approach is inspired by the expert-systems paradigm but, due to the specific processing problems associated with image analysis, uses direct encoding of the “decision rules” instead of a classic, formalized knowledge base. The algorithm involves a succession of phases that deal with image pre-processing, pupil location, iris location, combination of pupil and iris, eyelid detection, and filtering of reflections. The development was iterative, based on successive improvements tested over a set of training images. The results achieved indicate that this global approach can be useful for solving image analysis problems on which human “experts” outperform present computer-based solutions.
17.
18.
The problem of image segmentation is investigated with a focus on inhomogeneous multiphase image segmentation. Intensity inhomogeneity is an undesired phenomenon that represents the main obstacle to segmenting magnetic resonance (MR) and natural images. Complex images usually contain an arbitrary number of objects. This paper presents a new multiphase active contour model for simultaneous region classification of MR images and natural images without bias-field correction. In this model, a simple and effective initialization method is used to speed up the curve evolution toward the final result, and a new multiphase level set method is proposed to segment the multiple regions. The model not only extracts multiple objects simultaneously, but also provides smooth and accurate object boundaries. Experiments on several synthetic and real images demonstrate the effectiveness and accuracy of our model.
19.
New statistical techniques for the edge detection problem in images are developed. The image is modeled by signal and noise, which are independent, additive, Gaussian, and autoregressive in two dimensions. The optimal solution, in terms of statistical decision theory, leads to a test that decides among multiple, composite, overlapping hypotheses. A redefinition of the problem, involving nonoverlapping hypotheses, allows the formulation of a computationally attractive scheme. Results are presented with both simulated data and real satellite images. A comparison with standard gradient techniques is made.
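As a toy stand-in for the decision-theoretic test (the paper's 2-D autoregressive noise model and non-overlapping hypothesis design are richer), one can compare "step between the two window halves" against "constant mean" with a generalized likelihood-ratio statistic under i.i.d. Gaussian noise:

```python
import numpy as np

def step_edge_glrt(window):
    """window: 1-D intensity profile across a candidate edge.
    Returns an F-like statistic that is large when a step is present."""
    x = np.asarray(window, dtype=float)
    n = x.size // 2
    a, b = x[:n], x[n:2 * n]
    # Residual sums of squares under H1 (two means) and H0 (one mean).
    rss1 = ((a - a.mean())**2).sum() + ((b - b.mean())**2).sum()
    rss0 = ((x[:2 * n] - x[:2 * n].mean())**2).sum()
    return (rss0 - rss1) / (rss1 / (2 * n - 2) + 1e-12)
```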
20.
A Bayesian segmentation methodology for parametric image models
LaValle S.M., Hutchinson S.A. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1995, 17(2): 211-217
Region-based image segmentation methods require some criterion for determining when to merge regions. This paper presents a novel approach by introducing a Bayesian probability of homogeneity in a general statistical context. The authors' approach does not require parameter estimation and is therefore particularly beneficial for cases in which estimation-based methods are most prone to error: when little information is contained in some of the regions and, therefore, parameter estimates are unreliable. The authors apply this formulation to three distinct parametric model families that have been used in past segmentation schemes: implicit polynomial surfaces, parametric polynomial surfaces, and Gaussian Markov random fields. The authors present results on a variety of real range and intensity images.