12 search results (retrieved in 15 ms)
1.
Vanishing point detection without any a priori information
Even though vanishing points in digital images result from parallel lines in the 3D scene, most detection algorithms must rely heavily either on additional properties of the underlying 3D lines (such as orthogonality, or coplanarity and equal spacing) or on knowledge of the camera calibration parameters in order to avoid spurious responses. In this work, we develop a new detection algorithm that relies on the Helmholtz principle recently proposed for computer vision by Desolneux et al. (2001, 2003), at both the line detection and line grouping stages. This leads to a vanishing point detector with a low false-alarm rate and a high precision level, which relies on no a priori information about the image or calibration parameters and requires no parameter tuning.
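As an illustration of the voting step such a detector builds on, here is a minimal sketch (a hypothetical helper, not the authors' implementation): counting how many detected segments have a supporting line passing near a candidate vanishing point, the raw vote that an a contrario test then compares against a background noise model.

```python
import math

def convergent_segments(point, segments, tol):
    """Count segments whose supporting (infinite) line passes within
    distance `tol` of `point`."""
    px, py = point
    count = 0
    for (x1, y1), (x2, y2) in segments:
        dx, dy = x2 - x1, y2 - y1
        # Perpendicular distance from the point to the line through the segment.
        dist = abs(dy * (px - x1) - dx * (py - y1)) / math.hypot(dx, dy)
        if dist <= tol:
            count += 1
    return count
```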
2.
3.
Area openings and closings are morphological filters that efficiently suppress impulse noise in an image by removing small connected components of level sets. The problem of an objective choice of the area threshold remains open. Here, a mathematical model for random images is considered. Under this model, a Poisson approximation of the probability of appearance of any local pattern can be computed. In particular, the probability of observing a component of size larger than k in pure impulse noise has an explicit form. This permits the definition of a statistical test on the significance of connected components, thus providing an explicit formula for the area threshold of the denoising filter as a function of the impulse noise probability parameter. Finally, using threshold decomposition, a denoising algorithm for grey-level images is proposed.

David Coupier is 25 years old and a PhD student at the MAP5. He studied mathematics at the University of Orsay (Paris XI) and works on zero-one laws and Poisson approximations for random images. Agnès Desolneux is 30 years old and a CNRS researcher at the MAP5. She defended her PhD thesis in applied mathematics in 2000 under the direction of Jean-Michel Morel at the ENS Cachan, and works on statistical methods in image analysis. Bernard Ycart is 45 years old, Professor of mathematics at the University Paris 5, and director of the MAP5 (FRE CNRS 2428). He is a specialist in applied probability (Markov processes, stochastic algorithms).
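The removal step can be sketched as a binary area opening. This is a generic 4-connectivity implementation for illustration; the paper's actual contribution, the statistical choice of the area threshold, is not reproduced here.

```python
from collections import deque

def remove_small_components(img, area_threshold):
    """Binary area opening: remove 4-connected components of 1s whose
    area is below `area_threshold`.  `img` is a 2-D list of 0/1 values."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    seen = [[False] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if img[y][x] and not seen[y][x]:
                # Flood-fill the component containing (y, x).
                comp, queue = [], deque([(y, x)])
                seen[y][x] = True
                while queue:
                    cy, cx = queue.popleft()
                    comp.append((cy, cx))
                    for ny, nx in ((cy - 1, cx), (cy + 1, cx), (cy, cx - 1), (cy, cx + 1)):
                        if 0 <= ny < h and 0 <= nx < w and img[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                if len(comp) < area_threshold:
                    for cy, cx in comp:
                        out[cy][cx] = 0
    return out
```

Applied level-set by level-set (threshold decomposition), this yields a grey-level filter.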
4.
Dequantizing image orientation
We address the problem of computing a local orientation map in a digital image. We show that standard gray-level quantization causes a strong bias in the distribution of orientations, hindering any accurate geometric analysis of the image. We then propose a simple dequantization algorithm that maintains all of the image information and transforms the quantization noise into a near-Gaussian white noise (we actually prove that only Gaussian noise can maintain the isotropy of orientations). Mathematical arguments show that this restores a high-quality image isotropy. In contrast with other classical methods, this property is obtained without smoothing the image or increasing the signal-to-noise ratio (SNR). As an application, the experimental section shows that, thanks to this dequantization of orientations, geometric algorithms such as the detection of nonlocal alignments can be performed efficiently. We also point out similar improvements of orientation quality when our dequantization method is applied to aliased images.
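The quantization bias is easy to exhibit: on a quantized image, finite-difference gradients take small integer values, so their orientations collapse onto a handful of discrete directions. A minimal demonstration (illustrative setup, not the paper's experiment):

```python
import math

def orientations(img):
    # Gradient orientation at each pixel, via forward differences;
    # flat pixels (zero gradient) are skipped.
    h, w = len(img), len(img[0])
    angles = []
    for y in range(h - 1):
        for x in range(w - 1):
            gx = img[y][x + 1] - img[y][x]
            gy = img[y + 1][x] - img[y][x]
            if gx or gy:
                angles.append(math.atan2(gy, gx))
    return angles

# A smooth test image, then its quantization to integer gray levels.
smooth = [[10 * math.sin(0.1 * x + 0.17 * y) for x in range(40)] for y in range(40)]
coarse = [[round(v) for v in row] for row in smooth]

n_smooth = len(set(orientations(smooth)))
n_coarse = len(set(orientations(coarse)))
# Quantization collapses the orientation field onto a few discrete
# directions such as atan2(1, 1), atan2(0, 1), ...
```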
5.
The a contrario framework for the detection of convergences in an image consists in counting, for each tested point, the number of elementary linear structures that converge to it (up to a given precision); when this number is high enough, the point is declared a meaningful point of convergence. So far this is analogous to a Hough transform; the main contribution of the a contrario framework is to provide a statistical definition of what "high enough" means: large enough to ensure that, in an image whose elementary structures are distributed according to a background noise model, there is, in expectation, fewer than one detection. Our aim in this paper is to discuss, from a methodological viewpoint, the choice and the influence of the background noise model. This model is generally taken as the uniform independent distribution on elementary linear structures; here, we discuss the case of images that have a naturally anisotropic distribution of structures. Our motivating example is that of mammograms, in which we would like to detect stellate patterns (which appear as local convergences of spicules), and in which the linear structures are naturally oriented towards the nipple. In this paper, we show how to tackle the two problems of (a) defining and estimating an anisotropic "normal" distribution from an image, and (b) computing the probability that a random structure, following an anisotropic distribution, converges to any given convex region. We illustrate the whole approach with several examples.
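Once the anisotropic model assigns each structure its own probability p_i of converging to the tested region, the number of convergent structures in noise follows a Poisson-binomial law rather than a binomial one. A minimal exact tail computation, by dynamic programming (a generic sketch, not the paper's estimator):

```python
def poisson_binomial_tail(ps, k):
    """P[sum of independent Bernoulli(p_i) >= k], computed exactly.

    dist[j] holds P[exactly j successes among the structures seen so far].
    """
    dist = [1.0]
    for p in ps:
        new = [0.0] * (len(dist) + 1)
        for j, d in enumerate(dist):
            new[j] += d * (1 - p)      # this structure does not converge
            new[j + 1] += d * p        # this structure converges
        dist = new
    return sum(dist[k:])
```

Multiplying this tail by the number of tested regions gives the expected number of false detections, as in the isotropic case.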
6.
In this work, we propose a method to segment a 1-D histogram without a priori assumptions about the underlying density function. Our approach considers a rigorous definition of an admissible segmentation, avoiding over- and under-segmentation problems. A fast algorithm leading to such a segmentation is proposed. The approach is tested on both synthetic and real data. An application to the segmentation of written documents is also presented. We shall see that this application requires the detection of very small histogram modes, which are accurately detected by the proposed method.
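A rigorous notion of admissibility rests on testing histogram intervals against a null hypothesis. The sketch below uses a standard Hoeffding-type test on the Kullback-Leibler divergence between the observed and prior mass of an interval; it is an illustrative stand-in for the paper's exact criterion, with a uniform prior assumed.

```python
from math import log

def bernoulli_kl(r, p):
    # KL divergence between Bernoulli(r) and Bernoulli(p) distributions.
    if r == 0.0:
        return -log(1 - p)
    if r == 1.0:
        return -log(p)
    return r * log(r / p) + (1 - r) * log((1 - r) / (1 - p))

def interval_rejects_prior(hist, a, b, eps=1.0):
    """The interval [a, b] meaningfully rejects the uniform prior when the
    observed mass r deviates from the prior mass p by more than chance:
    N * KL(r || p) > log(number of tested intervals / eps)."""
    n_total = sum(hist)
    r = sum(hist[a:b + 1]) / n_total
    p = (b - a + 1) / len(hist)
    n_intervals = len(hist) * (len(hist) + 1) / 2
    return n_total * bernoulli_kl(r, p) > log(n_intervals / eps)
```

Both peaks (mass above the prior) and gaps (mass below it) are detected this way, which is what separating small but genuine modes requires.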
7.
Corrosion problems in chloride-containing media: possible solutions using some special stainless steels. Increasing water pollution forces the chemical industry to use water with rising chloride content for cooling and other purposes. This trend brings increasing corrosion danger, in particular pitting, stress corrosion cracking, corrosion fatigue, and crevice corrosion. The present paper deals with some steels characterized by resistance to these specific corrosion phenomena. A steel containing (in %) 21 Cr, 7.5 Ni, 2.5 Mo, 1.5 Cu, up to 2 Mn, up to 1 Si, and 0.06 C is particularly resistant to stress corrosion cracking. It contains 30 to 50% ferrite in an austenitic matrix. Even in magnesium chloride solutions it may be kept under a load of 7 kg/mm² without stress corrosion occurring (for a steel of the 18-10 CrNiMo type, the admissible load is only 2 kg/mm²). A steel containing (in %) 25 Ni, 21 Cr, 4.5 Mo, 1.5 Cu, up to 1 Si, up to 2 Mn, and 0.02 C has a broad passivity range and is resistant to general corrosion in acid reducing media and in phosphoric acid of all concentrations. A ferritic steel containing (in %) 26 Cr, 1 Mo, and minor additions of C, Mn, Si, Cu, Ni, and nitrogen is resistant to stress corrosion cracking in neutral chloride solutions and to general corrosion in oxidizing and neutral media, even against hydrogen sulfide and organic acids; beyond that, it is largely resistant to pitting in chloride solutions.
8.
In this paper, we address a complex image registration issue that arises when the dependencies between the intensities of the images to be registered are not spatially homogeneous. Such a situation is frequently encountered in medical imaging, when a pathology present in one of the images locally modifies the intensity dependencies observed on normal tissues. Usual image registration models, which are based on a single global intensity similarity criterion, fail to register such images, as they are blind to local deviations of intensity dependencies. The same limitation arises in contrast-enhanced images, where multiple pixel classes have different contrast agent absorption properties. In this paper, we propose a new model in which the similarity criterion is adapted locally to the images by classification of the intensity dependencies. Defined in a Bayesian framework, the similarity criterion is a mixture of probability distributions describing the dependencies in two classes. The model also includes a class map that locates the pixels of the two classes and weighs the two mixture components. The registration problem is formulated both as an energy minimization problem and as a maximum a posteriori estimation problem, and solved using a gradient descent algorithm. In the problem formulation and resolution, the image deformation and the class map are estimated simultaneously, leading to an original combination of registration and classification that we call image classifying registration. Whenever sufficient information about class location is available, the registration can also be performed on its own by fixing a given class map. Finally, we illustrate the interest of our model on two real applications from medical imaging: template-based segmentation of contrast-enhanced images and lesion detection in mammograms. We also evaluate our model on simulated medical data and show its ability to account for spatial variations of intensity dependencies while keeping good registration accuracy.
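The locally adapted criterion can be sketched as a two-class mixture likelihood in which each class assumes an affine intensity relation between the images, weighted by a soft class map. All parameter names here are illustrative, not the paper's notation.

```python
import math

def gauss(x, mu, sigma):
    # Gaussian density, used as the residual model of each class.
    return math.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def mixture_similarity(i1, i2, class_map, params):
    """Negative log-likelihood of intensity pairs (i1[k], i2[k]) under a
    two-class mixture: class c models i2 ~ a_c * i1 + b_c plus Gaussian
    noise sigma_c, and w[k] = P(pixel k belongs to class 1)."""
    (a1, b1, s1), (a2, b2, s2) = params
    nll = 0.0
    for v1, v2, w in zip(i1, i2, class_map):
        lik = w * gauss(v2, a1 * v1 + b1, s1) + (1 - w) * gauss(v2, a2 * v1 + b2, s2)
        nll -= math.log(max(lik, 1e-300))   # guard against log(0)
    return nll
```

In the full model this criterion would be minimized jointly over the deformation and the class map; here only the evaluation of the criterion is shown.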
9.
Edge Detection by Helmholtz Principle
We apply to edge detection a recently introduced method for computing geometric structures in a digital image without any a priori information. According to a basic principle of perception due to Helmholtz, an observed geometric structure is perceptually meaningful if its number of occurrences would be very small in a random situation: in this context, geometric structures are characterized as large deviations from randomness. This leads us to define and compute edges and boundaries (closed edges) in an image by a parameter-free method. Maximal detectable boundaries and edges are defined and computed, and the results are compared with those obtained by classical algorithms.
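The meaningfulness test for a candidate curve can be sketched as follows: if H(μ) is the empirical fraction of image pixels whose gradient norm exceeds μ, a curve of length l whose weakest point still has gradient μ gets the score N_curves · H(μ)^l, and it is detected when this expected number of false alarms falls below 1. This is a simplified sketch with toy inputs, not the full level-line machinery.

```python
def tail_fraction(grads, mu):
    # H(mu): empirical probability that a pixel's gradient norm is >= mu.
    return sum(1 for g in grads if g >= mu) / len(grads)

def edge_nfa(grads, curve_grads, n_curves):
    """Number of false alarms of a candidate edge: n_curves times the
    probability that, in noise, every one of the curve's points has a
    gradient at least as large as the curve's weakest point."""
    mu = min(curve_grads)
    return n_curves * tail_fraction(grads, mu) ** len(curve_grads)
```

Long curves with uniformly strong gradients get tiny scores, while short or weak curves stay above 1 and are rejected, with no user-set threshold.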
10.
Meaningful Alignments
We propose a method for detecting geometric structures in an image, without any a priori information. Roughly speaking, we say that an observed geometric event is meaningful if the expected number of its occurrences would be very small in a random image. We discuss the aporias of this definition and solve several of them by introducing maximal meaningful events and analyzing their structure. This methodology is applied to the detection of alignments in images.
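For alignments, the definition becomes concrete through a number-of-false-alarms computation: in an N×N image, a segment of l independent sample points of which k have gradient direction aligned with the segment at angular precision p is meaningful when N⁴ · P[B(l, p) ≥ k] < 1, the N⁴ factor counting the tested segments (ordered pairs of pixels). A direct sketch:

```python
from math import comb

def binomial_tail(n, k, p):
    # P[B(n, p) >= k]: probability of at least k successes in n trials.
    return sum(comb(n, i) * p ** i * (1 - p) ** (n - i) for i in range(k, n + 1))

def alignment_nfa(N, l, k, p=1 / 16):
    """Expected number, in a noise image, of tested segments at least as
    aligned as the observed one; the segment is meaningful when this < 1."""
    return N ** 4 * binomial_tail(l, k, p)
```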