Similar Articles
20 similar articles found
1.
The probability density function for the cross ratio is obtained under the hypothesis that the four image points have independent, identical, Gaussian distributions. The density function has six symmetries which are closely linked to the six different values of the cross ratio obtained by permuting the quadruple of points from which the cross ratio is calculated. The density function has logarithmic singularities corresponding to values of the cross ratio for which two of the four points are coincident. The cross ratio forms the basis of a simple system for recognising or classifying quadruples of collinear image points. The performance of the system depends on the choice of rule for deciding whether four image points have a given cross ratio. A rule is stated which is computationally straightforward and which takes into account the effects on the cross ratio of small errors in locating the image points. Two key properties of the rule are the probability R of rejection and the probability F of a false alarm. The probabilities R and F depend on a threshold t in the decision rule, and there is a trade-off between R and F obtained by varying t. It is shown that the trade-off is insensitive to the given cross ratio. Let F_w = max{F} be the worst-case false-alarm probability. Then, over a stated range of F_w, R and F_w are related approximately by an expression involving the accuracy with which image points can be located relative to the width of the image and a constant r_F known as the normalised false alarm rate; the value of r_F is 14.37. The consequences of these relations between R and F_w are discussed. It is conjectured that this general form of the trade-off between R and F_w holds for a wide class of scalar invariants that could be used for model based object recognition. Invariants with the same type of trade-off between the probability of rejection and the probability of false alarm are said to be nondegenerate for model based vision.
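For illustration, a minimal Python sketch of the quantities involved: the cross ratio of four collinear points (in one fixed permutation convention) and a simple accept/reject test. The plain threshold on the absolute difference is a simplified stand-in for the paper's decision rule, and the tolerance value is illustrative.

```python
import numpy as np

def cross_ratio(x1, x2, x3, x4):
    """Cross ratio of four collinear points given by their 1-D coordinates
    (one of the six permutation-related values discussed in the abstract)."""
    return ((x1 - x3) * (x2 - x4)) / ((x2 - x3) * (x1 - x4))

def accept(points, tau, tol):
    """Accept the quadruple if its cross ratio lies within tol of the model value tau.
    (Simplified stand-in for the paper's threshold rule; tol plays the role of t.)"""
    return abs(cross_ratio(*points) - tau) < tol

# example: a model quadruple perturbed by small localisation errors
rng = np.random.default_rng(0)
model = np.array([0.0, 1.0, 2.0, 4.0])
tau = cross_ratio(*model)                      # 1.5 for this quadruple
noisy = model + rng.normal(scale=0.01, size=4)
print(tau, cross_ratio(*noisy), accept(noisy, tau, tol=0.05))
```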

2.
An efficient procedure is proposed for the reliability analysis of frame structures with respect to the buckling limit state, under the assumption that no imperfections are present and that the elastic parameters are uncertain and modeled as random variables. The approach allows a deeper investigation of structures which are not sensitive to imperfections. The procedure relies on a Response Surface Method that adopts a simple ratio of polynomials without cross-terms as the performance function. Such a relationship analytically approximates the dependence of the buckling load on the basic variables, furnishing a limit state equation which is very close to the exact one when a proper experimental design is adopted. In this way, a Monte Carlo simulation applied to the response surface leads to a good approximation with low computational effort. Several numerical examples show the accuracy and effectiveness of the method for varying structural complexity, correlation between the basic variables, and their distributions.
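A minimal sketch of the two ingredients described above, assuming a cheap analytic stand-in (`exact_buckling_load`) for the structural buckling analysis: fit a ratio of polynomials without cross-terms to a small experimental design, then run Monte Carlo simulation on the surrogate. All functions, distributions and thresholds here are illustrative, not the paper's.

```python
import numpy as np
from scipy.optimize import curve_fit

def exact_buckling_load(e1, e2):
    """Hypothetical stand-in for an 'exact' buckling analysis of the frame."""
    return 3.0 * e1 * e2 / (e1 + 2.0 * e2)

def response_surface(x, a0, a1, a2, b1, b2):
    """Ratio of first-order polynomials without cross-terms in the two basic variables."""
    e1, e2 = x
    return (a0 + a1 * e1 + a2 * e2) / (1.0 + b1 * e1 + b2 * e2)

# small experimental design around the mean values of the (normalized) elastic parameters
design = np.array([(e1, e2) for e1 in (0.8, 1.0, 1.2) for e2 in (0.8, 1.0, 1.2)])
loads = np.array([exact_buckling_load(e1, e2) for e1, e2 in design])
params, _ = curve_fit(response_surface, design.T, loads, p0=np.ones(5))

# Monte Carlo simulation on the cheap surrogate instead of the expensive model
rng = np.random.default_rng(1)
samples = rng.lognormal(mean=0.0, sigma=0.1, size=(100_000, 2))   # illustrative input distribution
loads_mc = response_surface(samples.T, *params)
print("P(buckling load < 0.85):", np.mean(loads_mc < 0.85))
```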

3.
Probabilistic analysis of a static frame model
This paper describes our efforts during our participation in the Sandia Validation Workshop. The focus of the paper is the calibration of material models and the simulation of random fields to characterize the variation of material properties across a spatial field. Both parametric and non-parametric methods were used to represent uncertainty. Part of the challenge of this problem is the small amount of data available for the necessary probabilistic analyses in support of calibration, validation, accreditation and prediction activities. The analysis methods and corresponding results are described.

4.
5.
Fan Jiang, Liao Huming, Wang Hao, Hu Junheng, Chen Zhiying, Lu Jian, Li Bo. Structural and Multidisciplinary Optimization, 2018, 57(1): 373-392
A novel surrogate model based on the Local Maximum-Entropy (LME) approximation is proposed in this paper. By varying the degrees of locality, the...

6.
In this paper, a genetic model based on the operations of recombination and mutation is studied and applied to combinatorial optimization problems. Results are: 1. The equations of the deterministic dynamics in the thermodynamic limit (infinite populations) are derived and, for a sufficiently small mutation rate, the attractors are characterized; 2. A general approximation algorithm for combinatorial optimization problems is designed. The algorithm is applied to the Max Ek-Sat problem, and the quality of the solution is analyzed. It is proved to be optimal for k ≥ 3 with respect to the worst case analysis; for Max E3-Sat the average case performances are experimentally compared with other optimization techniques.
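A minimal sketch of a recombination-and-mutation genetic search applied to a random Max-E3-Sat instance; the instance generator, selection scheme, population size and rates are illustrative choices rather than the model analyzed in the paper.

```python
import random

def random_instance(n_vars=30, n_clauses=120, seed=0):
    """Random Max-E3-Sat instance: each clause has 3 distinct literals (+i = x_i, -i = not x_i)."""
    rng = random.Random(seed)
    return [[v * rng.choice((1, -1)) for v in rng.sample(range(1, n_vars + 1), 3)]
            for _ in range(n_clauses)]

def satisfied(clauses, assign):
    """Number of clauses satisfied by a boolean assignment."""
    return sum(any((lit > 0) == assign[abs(lit) - 1] for lit in clause) for clause in clauses)

def evolve(clauses, n_vars=30, pop_size=60, generations=200, p_mut=0.02, seed=1):
    rng = random.Random(seed)
    pop = [[rng.random() < 0.5 for _ in range(n_vars)] for _ in range(pop_size)]
    for _ in range(generations):
        parents = sorted(pop, key=lambda a: satisfied(clauses, a), reverse=True)[:pop_size // 2]
        children = []
        while len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n_vars)                                       # one-point recombination
            child = [bit ^ (rng.random() < p_mut) for bit in a[:cut] + b[cut:]]  # bit-flip mutation
            children.append(child)
        pop = children
    best = max(pop, key=lambda a: satisfied(clauses, a))
    return best, satisfied(clauses, best)

clauses = random_instance()
_, score = evolve(clauses)
print(f"{score}/{len(clauses)} clauses satisfied")
```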

7.
Stereo vision systems are widely used for autonomous robot navigation, and most of them apply local window-based methods for real-time purposes. Normalized cross correlation (NCC) is robust to different illumination conditions between the two cameras, but it is notorious for its high computational cost and is therefore rarely used in real-time stereo vision systems. This paper proposes an efficient normalized cross correlation calculation method based on the integral image technique, whose computational complexity is independent of the size of the matching window. Experimental results show that our algorithm generates the same results as traditional normalized cross correlation with a much lower computational cost. Our algorithm is suitable for planet rover navigation.
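A minimal sketch of the underlying idea, assuming grey-scale float images: all window sums needed by NCC are taken from summed-area tables (integral images), so the per-pixel cost does not depend on the window radius. This illustrates the general technique, not the paper's exact algorithm.

```python
import numpy as np

def box_sum(img, r):
    """Sum of img over a (2r+1) x (2r+1) window centred at each pixel (windows clamped at the border)."""
    ii = np.pad(img, ((1, 0), (1, 0))).cumsum(0).cumsum(1)        # integral image (summed-area table)
    p = np.pad(ii, r, mode="edge")
    h, w = img.shape
    return (p[2*r+1:2*r+1+h, 2*r+1:2*r+1+w] - p[:h, 2*r+1:2*r+1+w]
            - p[2*r+1:2*r+1+h, :w] + p[:h, :w])

def ncc_disparity(left, right, max_disp=32, r=4, eps=1e-6):
    """Winner-takes-all disparity using window NCC; cost per pixel is independent of r."""
    left, right = np.asarray(left, float), np.asarray(right, float)
    n = box_sum(np.ones_like(left), r)                            # window pixel counts
    sL, sLL = box_sum(left, r), box_sum(left * left, r)
    best, disp = np.full(left.shape, -np.inf), np.zeros(left.shape, int)
    for d in range(max_disp):
        shifted = np.roll(right, d, axis=1)                       # crude shift; first d columns invalid
        sR, sRR = box_sum(shifted, r), box_sum(shifted * shifted, r)
        sLR = box_sum(left * shifted, r)
        num = sLR - sL * sR / n
        den = np.sqrt(np.maximum((sLL - sL**2 / n) * (sRR - sR**2 / n), eps))
        score = num / den
        best, disp = np.where(score > best, score, best), np.where(score > best, d, disp)
    return disp

# usage on random data (real use would pass a rectified stereo pair)
rng = np.random.default_rng(0)
print(ncc_disparity(rng.random((60, 80)), rng.random((60, 80))).shape)
```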

8.
Two experimental systems for query-based visual analysis are described. The first simulates an image sequence of moving, dividing cells with simple rules and monitors significant visual events. The second processes single raw images of real cells. Both invoke appropriate processing using explicit knowledge to respond to user queries. It is proposed that this selectivity is an essential feature for any system to analyse raw image sequences of moving, dividing cells as the computational expense of allowing all possible processing to proceed is enormous. Processing as required by the query allows adaptive strategies (e.g. different resolutions and focal processing) to be utilized and gives an effective attentional control structure to the system.

9.
10.
This study is performed on the four 2.5 MWe emergency diesel generator (EDG) sets of Hydro-Quebec's Gentilly-2 Nuclear Power Station. EDGs are safety-related systems for the case of a loss of off-site power. This study establishes the basis of an enhanced preventive maintenance and periodic testing program with the objective of improving the long-term reliability and availability of the EDGs. It is also the first step toward a PSA program based on the real historical data of the system.

11.
The internet of things (IoT) attracts great interest in many application domains concerned with monitoring and control of physical phenomena. However, application development is still one of the main hurdles to a wide adoption of IoT technology. Application development is done at a low level, very close to the operating system and requires programmers to focus on low-level system issues. The underlying APIs can be very complicated and the amount of data collected can be huge. This can be very hard to deal with as a developer. In this paper, we present a runtime model based approach to IoT application development. First, the manageability of sensor devices is abstracted as runtime models that are automatically connected with the corresponding systems. Second, a customized model is constructed according to a personalized application scenario and the synchronization between the customized model and sensor device runtime models is ensured through model transformation. Thus, all the application logic can be carried out by executing programs on the customized model. An experiment on a real-world application scenario demonstrates the feasibility, effectiveness, and benefits of the new approach to IoT application development.
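A minimal sketch of the idea with hypothetical names (not the paper's API): each sensor device is mirrored by a runtime model, and a customized application-level model is kept in sync with the device models through a model transformation, so the application logic only touches the customized model.

```python
class DeviceRuntimeModel:
    """Mirrors the manageability interface of one physical sensor device."""
    def __init__(self, device_id, read_fn):
        self.device_id = device_id
        self._read_fn = read_fn            # stand-in for the low-level driver/OS call
        self.state = {}

    def refresh(self):
        self.state = self._read_fn()       # pull the current attributes from the device

class CustomizedModel:
    """Application-level view kept in sync with several device runtime models."""
    def __init__(self, device_models, transform):
        self.device_models = device_models
        self.transform = transform         # model transformation: device states -> application view
        self.view = {}

    def synchronize(self):
        for m in self.device_models:
            m.refresh()
        self.view = self.transform({m.device_id: m.state for m in self.device_models})

# usage: the application logic only sees an averaged room temperature
sensors = [DeviceRuntimeModel(f"sensor-{i}", lambda i=i: {"temp_c": 20.0 + i}) for i in range(3)]
app = CustomizedModel(sensors,
                      lambda states: {"avg_temp_c": sum(s["temp_c"] for s in states.values()) / len(states)})
app.synchronize()
print(app.view)                            # {'avg_temp_c': 21.0}
```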

12.
We have developed a computational model for texture perception which has physiological relevance and correlates well with human performance. The model attempts to simulate the visual processing characteristics by incorporating mechanisms tuned to detect luminance polarity, orientation, spatial frequency and color, which are characteristic features of any textural image. We obtained a very good correlation between the model's simulation results and data from psychophysical experiments with a systematically selected set of visual stimuli with texture patterns defined by spatial variations in color, luminance, and orientation. In addition, the model correctly predicts texture segregation performance with key benchmarks and natural textures. This represents a first effort to incorporate chromatic signals in texture segregation models of psychophysical relevance, most of which have so far treated grey-level images. Another novel feature of the model is the extension of the concept of spatial double opponency to domains beyond color, such as orientation and spatial frequency. The model has potential applications in the areas of image processing, machine vision and pattern recognition, and scientific visualization.
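As one illustrative ingredient of such a model, the sketch below builds a small bank of filters tuned to orientation and spatial frequency and pools their local energies into per-pixel texture features; the parameters are arbitrary, and the paper's luminance-polarity and colour channels and the double-opponent stage are not reproduced here.

```python
import numpy as np
from scipy.signal import fftconvolve

def gabor_kernel(freq, theta, sigma=4.0, size=21):
    """Cosine Gabor tuned to spatial frequency freq (cycles/pixel) and orientation theta."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    return np.exp(-(x**2 + y**2) / (2 * sigma**2)) * np.cos(2 * np.pi * freq * xr)

def channel_energy(image, freq, theta):
    """Local energy of one orientation/frequency channel (squared response pooled over 9x9)."""
    response = fftconvolve(image, gabor_kernel(freq, theta), mode="same")
    return fftconvolve(response**2, np.ones((9, 9)) / 81.0, mode="same")

# per-pixel feature vector: energies over 2 spatial frequencies x 4 orientations
image = np.random.default_rng(0).random((64, 64))
features = np.stack([channel_energy(image, f, t)
                     for f in (0.1, 0.2) for t in np.deg2rad((0, 45, 90, 135))], axis=-1)
print(features.shape)   # (64, 64, 8)
```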

13.
Many vision problems can be formulated as the minimization of appropriate energy functionals. These energy functionals are usually minimized based on the calculus of variations (Euler-Lagrange equation). Once the Euler-Lagrange equation has been determined, it needs to be discretized in order to implement it on a digital computer. This is not a trivial task and is, moreover, error-prone. In this paper, we propose a flexible alternative: we discretize the energy functional and subsequently apply the mathematical concept of algorithmic differentiation to directly derive algorithms that implement the energy functional's derivatives. This approach has several advantages. First, the computed derivatives are exact with respect to the implementation of the energy functional. Second, it is basically straightforward to compute second-order derivatives and, thus, the Hessian matrix of the energy functional. Third, algorithmic differentiation is a process which can be automated. We demonstrate this novel approach on three representative vision problems (namely, denoising, segmentation, and stereo) and show that state-of-the-art results are obtained with little effort.
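A minimal sketch of the approach on the denoising example, using a toy forward-mode dual-number class in place of a real algorithmic-differentiation tool: the 1-D energy is discretized directly, its exact gradient is obtained by AD, and a plain gradient descent minimizes it. The signal, smoothness weight and step size are illustrative.

```python
import numpy as np

class Dual:
    """Toy forward-mode AD value: carries a value and one directional derivative."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def _wrap(self, o):
        return o if isinstance(o, Dual) else Dual(o)
    def __add__(self, o):
        o = self._wrap(o)
        return Dual(self.val + o.val, self.dot + o.dot)
    __radd__ = __add__
    def __sub__(self, o):
        o = self._wrap(o)
        return Dual(self.val - o.val, self.dot - o.dot)
    def __mul__(self, o):
        o = self._wrap(o)
        return Dual(self.val * o.val, self.val * o.dot + self.dot * o.val)
    __rmul__ = __mul__

def energy(u, f, lam):
    """Discretized 1-D denoising energy: data fidelity plus smoothness of neighbouring samples."""
    data = sum((ui - fi) * (ui - fi) for ui, fi in zip(u, f))
    smooth = sum((u[i + 1] - u[i]) * (u[i + 1] - u[i]) for i in range(len(u) - 1))
    return data + lam * smooth

def gradient(u, f, lam):
    """Exact dE/du_k obtained by seeding one dual direction per component (forward mode)."""
    grad = []
    for k in range(len(u)):
        ud = [Dual(v, 1.0 if i == k else 0.0) for i, v in enumerate(u)]
        grad.append(energy(ud, f, lam).dot)
    return np.array(grad)

f = np.array([0.0, 1.0, 0.2, 0.9, 1.0])     # noisy 1-D signal
u = f.copy()
for _ in range(200):                         # plain gradient descent on the discretized energy
    u -= 0.1 * gradient(u, f, lam=1.0)
print(np.round(u, 3))                        # smoothed reconstruction
```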

14.
The current work addresses the problem of 3D model tracking in the context of monocular and stereo omnidirectional vision in order to estimate the camera pose. To this end, we track 3D objects modeled by line segments, because the straight-line feature is often used to model the environment. Indeed, we are interested in mobile robot navigation using omnidirectional vision in structured environments. In the case of omnidirectional vision, 3D straight lines are projected as conics in omnidirectional images, and under certain conditions these conics may have singularities. In this paper, we present two contributions. We first propose a new spherical formulation of the pose estimation which is free of these singularities, using an object model composed of lines. The theoretical formulation and the validation on synthetic images show that the new formulation clearly outperforms the former image-plane one. The second contribution is the extension of the spherical representation to the stereovision case; we consider a sensor which combines a camera and four mirrors. Results in various situations show robustness to illumination changes and local mistracking. As a final result, the proposed stereo spherical formulation allows us to localize a robot online, indoors and outdoors, whereas the classical formulation fails.
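A small illustrative sketch of the spherical representation (not the paper's estimator): seen from the effective viewpoint, a 3-D line projects onto the unit sphere as an arc of a great circle, so the line can be represented by the unit normal of its interpretation plane and observed sphere points can be tested against n·s = 0.

```python
import numpy as np

def great_circle_normal(p1, p2):
    """Unit normal of the interpretation plane through the viewpoint and a 3-D line (two points)."""
    n = np.cross(p1, p2)
    return n / np.linalg.norm(n)

def project_to_sphere(points):
    """Central projection of 3-D points onto the unit sphere around the effective viewpoint."""
    points = np.asarray(points, dtype=float)
    return points / np.linalg.norm(points, axis=-1, keepdims=True)

# one model line segment (two 3-D endpoints in the camera/mirror frame) and its spherical image
p1, p2 = np.array([1.0, 0.0, 2.0]), np.array([1.0, 1.0, 2.0])
n = great_circle_normal(p1, p2)
s = project_to_sphere([p1, p2])
print(n, np.allclose(s @ n, 0.0))    # the projected points satisfy the great-circle equation n.s = 0
```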

15.
This paper investigates the possible applications of dynamical fuzzy systems to control nonlinear plants with asymptotically stable zero dynamics using a fuzzy nonlinear internal model control strategy. The developed strategy consists of including a dynamical Takagi-Sugeno fuzzy model of the plant within the control structure. In this way, the controller design simply amounts to a fuzzy model inversion. In this framework, the originality of the presented work lies in the use of a dynamical fuzzy model and its inversion. In order to implement the control structure, two crucial points have to be addressed in the considered fuzzy context: on the one hand, model representation and identification; on the other, model inversion. As the fuzzy system can be viewed as a collection of elementary subsystems, its inversion is approached here in a local way, i.e., on the elementary subsystems capable of providing an inverse solution. In this case, the inversion of the global fuzzy system is thus tackled by inversion of some of its components. By doing so, exact inversion is obtained and offset-free performance is ensured. In order to guarantee a desired regulation behavior and robustness of stability of the control system, the fuzzy controller is connected in series with a robustness filter. The potential of the proposed method is demonstrated with simulation examples.
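A minimal sketch of the local inversion idea for a Takagi-Sugeno model that is affine in the input: each rule consequent is y(k+1) = a_i·y(k) + b_i·u(k) + c_i, so the blended model remains affine in u and can be solved exactly for the control input. The memberships and rule parameters below are illustrative, and the robustness filter is omitted.

```python
import numpy as np

A = np.array([0.90, 0.70])      # local consequent parameters of two illustrative rules
B = np.array([0.50, 1.20])
C = np.array([0.00, 0.30])

def memberships(y):
    """Two complementary membership degrees over the operating range of y (illustrative)."""
    w1 = np.clip(1.0 - abs(y) / 2.0, 0.0, 1.0)
    return np.array([w1, 1.0 - w1])

def ts_model(y, u):
    """Blended Takagi-Sugeno prediction y(k+1) from the weighted local affine models."""
    w = memberships(y)
    return w @ (A * y + B * u + C) / w.sum()

def ts_inverse(y, y_ref_next):
    """Exact inversion of the blended affine-in-input model for the control input u(k)."""
    w = memberships(y) / memberships(y).sum()
    return (y_ref_next - w @ (A * y + C)) / (w @ B)

# one-step check: the inverse control reproduces the reference output
y, y_ref = 0.4, 1.0
u = ts_inverse(y, y_ref)
print(u, ts_model(y, u))        # ts_model(y, u) equals y_ref up to round-off
```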

16.
In order to use interpolated data wisely, it is important to have reliability and confidence measures associated with it. A method for computing the reliability at each point of any linear functional of a surface reconstructed using regularization is presented. The proposed method is to define a probability structure on the class of possible objects and compute the variance of the corresponding random variable. This variance is a natural measure for uncertainty, and experiments have shown it to correlate well with reality. The probability distribution used is based on the Boltzmann distribution. The theoretical part of the work utilizes tools from classical analysis, functional analysis, and measure theory on function spaces. The theory was tested and applied to real depth images. It was also applied to formalize a paradigm of optimal sampling, which was successfully tested on real depth images.
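For the special case of a quadratic regularization energy, the Boltzmann distribution is Gaussian and the variance of any linear functional has a closed form; the sketch below illustrates this on a 1-D profile with a first-difference smoothness term (an assumed, simplified setting, not the paper's general construction).

```python
import numpy as np

n = 50
rng = np.random.default_rng(0)
d = np.sin(np.linspace(0.0, 3.0, n)) + 0.05 * rng.normal(size=n)   # noisy depth profile
sigma2, lam = 0.05**2, 10.0

# quadratic energy E(z) = ||z - d||^2 / (2*sigma2) + (lam/2) * ||D z||^2, D = first differences
D = np.diff(np.eye(n), axis=0)
hessian = np.eye(n) / sigma2 + lam * D.T @ D
cov = np.linalg.inv(hessian)              # covariance of the Boltzmann (here Gaussian) distribution

z_map = cov @ (d / sigma2)                # most probable reconstructed profile
ell = np.zeros(n)
ell[n // 2] = 1.0                         # linear functional: the reconstruction at the middle sample
print("variance of the functional:", ell @ cov @ ell)
```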

17.
The problem of clustering subpopulations on the basis of samples is considered within a statistical framework: a distribution for the variables is assumed for each subpopulation, and the dissimilarity between any two subpopulations is defined as the likelihood ratio statistic which compares the hypothesis that the two subpopulations differ in the parameter of their distributions to the hypothesis that they do not. A general algorithm for the construction of a hierarchical classification is described which has the important property of not producing inversions in the dendrogram. The essential elements of the algorithm are specified for the case of well-known distributions (normal, multinomial and Poisson), and an outline of the general parametric case is also discussed. Several applications are discussed, the main one being a novel approach to dealing with massive data via a two-step strategy. After clustering the data into a reasonable number of 'bins' by a fast algorithm such as k-Means, we apply a version of our algorithm to the resulting bins. Multivariate normality for the means calculated on each bin is assumed: this is justified by the central limit theorem and the assumption that each bin contains a large number of units, an assumption generally justified when dealing with truly massive data such as those currently found in modern data analysis. However, no assumption is made about the data generating distribution.
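A minimal sketch of the dissimilarity for the simplest Gaussian case (common, known variance, unknown means): the likelihood-ratio statistic compares "one shared mean" against "two separate means". Feeding such dissimilarities to an agglomerative scheme is the idea behind the hierarchy described above; the data and variance here are illustrative.

```python
import numpy as np
from scipy.stats import norm

def lr_dissimilarity(x, y, sigma=1.0):
    """2 * (log-likelihood with separate means - log-likelihood with one pooled mean)."""
    pooled = np.mean(np.concatenate([x, y]))
    ll_sep = norm.logpdf(x, np.mean(x), sigma).sum() + norm.logpdf(y, np.mean(y), sigma).sum()
    ll_pool = norm.logpdf(x, pooled, sigma).sum() + norm.logpdf(y, pooled, sigma).sum()
    return 2.0 * (ll_sep - ll_pool)

rng = np.random.default_rng(0)
a, b, c = rng.normal(0.0, 1, 200), rng.normal(0.1, 1, 200), rng.normal(3.0, 1, 200)
print(lr_dissimilarity(a, b), lr_dissimilarity(a, c))   # similar pair vs clearly distinct pair
```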

Antonio Ciampi received his M.Sc. and Ph.D. degrees from Queen's University, Kingston, Ontario, Canada in 1973. He taught at the University of Zambia from 1973 to 1977. Returning to Canada, he worked as a statistician in the Treasury of the Ontario Government. From 1978 to 1985 he was Senior Scientist in the Ontario Cancer Institute, Toronto, and taught at the University of Toronto. In 1985 he moved to Montreal, where he is Associate Professor in the Department of Epidemiology, Biostatistics and Occupational Health, McGill University. He has also been Senior Scientist of the Montreal Children's Hospital Research Institute, the Montreal Heart Institute and the St. Mary's Hospital Community Health Research Unit. His research interests include Statistical Learning, Data Mining and Statistical Modeling.

Yves Lechevallier joined INRIA in 1976, where he was engaged in the project of Clustering and Pattern Recognition. Since 1988 he has been teaching Clustering, Neural Networks and Data Mining at the University of Paris-IX, CNAM and ENSAE. He specializes in Mathematical Statistics, Applied Statistics, Data Analysis and Classification. His current research interests are: (1) clustering algorithms (dynamic clustering method, Kohonen maps, divisive clustering method); (2) discrimination problems and decision tree methods, including building efficient neural networks from classification trees.

Manuel Castejón Limas received his engineering degree from the Universidad de Oviedo in 1999 and his Ph.D. degree from the Universidad de La Rioja in 2004. Since 2002 he has taught project management at the Universidad de León. His research is oriented towards the development of data analysis procedures that may aid project managers in their decision-making processes.

Ana González Marcos received her M.Sc. and Ph.D. degrees from the University of La Rioja, Spain. In 2003 she joined the University of León, Spain, where she works as a Lecturer in the Department of Mechanical, Informatic and Aerospace Engineering. Her research interests include the application of multivariate analysis and artificial intelligence techniques in order to improve the quality of industrial processes.

18.
Surface precipitation estimation is very important in hydrologic forecasting. To account for the influence of the neighbors on the precipitation of an arbitrary grid in the network, Bayesian networks and Markov random fields were adopted to estimate surface precipitation. Spherical coordinates and the expectation-maximization (EM) algorithm were used for regional interpolation and for estimation of the precipitation at an arbitrary point in the region. Surface precipitation estimation of seven precipitation station...

19.
A geometric vehicle model is an important part of vision-sensor-based vehicle detection systems. To improve the robustness of forward vehicle detection, a vehicle-rear model in the world coordinate system is constructed from the geometric features of the rear contour; through coordinate transformations, the rear model in the vision-sensor coordinate system is obtained, and an image-plane rear model is then derived. Expressions for the model parameters and their correspondence with the real-world geometric parameters of the vehicle rear are given, explicitly establishing the relation between vehicle position and size in the image. Experiments show that applying this model to vehicle detection suppresses false detections that violate the perspective relation of vehicles in the image and effectively improves the robustness of vehicle detection.
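A minimal sketch of the kind of perspective constraint the abstract describes, using standard pinhole geometry with an assumed camera height, focal length and vehicle-rear width (all values illustrative, not the paper's model): the expected image width of a vehicle rear follows from the image row of its road contact point, and detections that violate the relation can be rejected.

```python
f_px, v0 = 800.0, 240.0          # focal length in pixels and image row of the horizon (assumed)
H, W = 1.2, 1.8                  # camera height above the road (m) and assumed rear width (m)

def expected_width_px(v_contact):
    """Expected image width of a vehicle rear whose road-contact row is v_contact."""
    z = f_px * H / (v_contact - v0)          # distance recovered from the contact-row relation
    return f_px * W / z                      # equivalently W * (v_contact - v0) / H

def plausible(v_contact, width_px, tol=0.3):
    """Reject detections whose width disagrees with the perspective relation by more than tol."""
    w_exp = expected_width_px(v_contact)
    return abs(width_px - w_exp) <= tol * w_exp

print(expected_width_px(340.0))              # ~150 px for a contact row 100 px below the horizon
print(plausible(340.0, 150.0), plausible(340.0, 400.0))
```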

20.
