Similar Documents
 Retrieved 20 similar documents (search time: 0 ms)
1.
2.
A new search method over (x, y, θ), called position-orientation masking, is introduced. It is applied to vertices, which may be separated into different bands of acuteness. Position-orientation masking yields exactly one θ value for each (x, y) that it considers a possible location of an object occurrence. Detailed matching of edge segments is performed only at these candidate (x, y, θ) to determine whether objects actually occur there. Template matching is accelerated dramatically, since the candidates comprise only a small fraction of all (x, y, θ). Position-orientation masking thus eliminates the need for exhaustive search when deriving the candidate (x, y, θ). The search is guided by correlations between template vertices and distance transforms of image vertices. When a poor correlation is encountered at a particular position and orientation, nearby positions at that orientation and nearby orientations at that position are masked out. Position and orientation are traversed by quadrant and binary decomposition.
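The core idea, correlating template vertices against a distance transform of the image vertices so that detailed matching runs only at a few candidates, can be sketched as follows. This is a minimal illustration, not the paper's algorithm: the brute-force distance transform, the point sets, and the function names are ours, and orientation search and masking are omitted.

```python
# Hypothetical sketch: distance-transform-guided candidate search for
# template matching (translation only; the paper also searches orientation).

def distance_transform(edge_points, w, h):
    """Brute-force Euclidean distance transform of a set of edge points."""
    return [[min((x - ex) ** 2 + (y - ey) ** 2 for ex, ey in edge_points) ** 0.5
             for x in range(w)] for y in range(h)]

def match_score(dt, template_points, dx, dy):
    """Mean distance of the translated template points to the nearest image
    edge; lower scores indicate better candidate locations."""
    return sum(dt[py + dy][px + dx] for px, py in template_points) / len(template_points)

def best_candidates(dt, template_points, w, h, threshold):
    """Keep only offsets whose score beats the threshold; detailed edge
    matching would then run at these few candidates only."""
    tw = max(px for px, _ in template_points) + 1
    th = max(py for _, py in template_points) + 1
    return [(dx, dy) for dy in range(h - th + 1) for dx in range(w - tw + 1)
            if match_score(dt, template_points, dx, dy) <= threshold]
```

With an L-shaped set of image edges and the same shape as template, only the true offset survives a tight threshold, which is what makes the subsequent detailed matching cheap.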

3.
We describe an image representation that uniquely encodes the information in a gray-scale image, decouples the effects of illumination, reflectance, and angle of incidence, and is invariant, within a linear shift, to perspective, position, orientation, and size of arbitrary planar forms. Then we provide a theoretical basis for applying this representation to achieve invariant form recognition.

4.
This paper presents a novel way of representing and computing image features encapsulated within different regions of scale-space. Employing a thermodynamical model for scale-space generation, the method derives features as those corresponding to "entropy rich" image regions where, within a given range of spatial scales, the entropy gradient remains constant. Different types of image features, defining regions of different information content, are accordingly encoded by such regions within different bands of spatial scale.
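The notion of tracking entropy across scales can be sketched in one dimension. This is an illustration under our own assumptions, not the paper's thermodynamical model: we use box smoothing as a stand-in for scale-space generation and a simple histogram entropy.

```python
# Hypothetical sketch: entropy of a signal at several smoothing scales.
# "Entropy rich" regions would be those whose entropy gradient across
# scale stays roughly constant (the paper's criterion).
from math import log2

def entropy(values, bins=8):
    """Shannon entropy of a list of gray values, histogrammed into bins."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return 0.0
    hist = [0] * bins
    for v in values:
        hist[min(bins - 1, int((v - lo) / (hi - lo) * bins))] += 1
    n = len(values)
    return -sum(c / n * log2(c / n) for c in hist if c)

def box_smooth(values, radius):
    """1-D box smoothing as a crude stand-in for one scale-space level."""
    return [sum(values[max(0, i - radius): i + radius + 1]) /
            len(values[max(0, i - radius): i + radius + 1])
            for i in range(len(values))]

def entropy_across_scales(values, radii):
    """Entropy of the signal at each smoothing scale."""
    return [entropy(box_smooth(values, r)) for r in radii]
```

A highly textured signal loses entropy as the scale grows, so the entropy-versus-scale profile carries information about the region's content.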

5.
6.
The ability to quickly locate one or more instances of a model in a grey scale image is of importance to industry. The recognition/localization must be fast and accurate. In this paper we present an algorithm which incorporates normalized correlation into a pyramid image representation structure to perform fast recognition and localization. The algorithm employs an estimate of the gradient of the correlation surface to perform a steepest descent search. Test results are given detailing search time by target size, the effect of rotation and scale changes on performance, and the accuracy of the subpixel localization method used within the algorithm. Finally, results are given for searches on real images with perspective distortion and added Gaussian noise.
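The combination of normalized correlation with a pyramid can be sketched as a coarse-to-fine search: locate the best match at half resolution, then refine in a small window at full resolution. This is a minimal two-level sketch under our own assumptions (2x2-average pyramid, exhaustive coarse search); the paper's steepest-descent refinement and subpixel step are not reproduced.

```python
def ncc(patch, template):
    """Normalized correlation of two equal-size flat lists of pixels."""
    n = len(template)
    mp, mt = sum(patch) / n, sum(template) / n
    num = sum((p - mp) * (t - mt) for p, t in zip(patch, template))
    dp = sum((p - mp) ** 2 for p in patch) ** 0.5
    dt = sum((t - mt) ** 2 for t in template) ** 0.5
    return num / (dp * dt) if dp and dt else 0.0

def extract(img, x, y, w, h):
    return [img[y + j][x + i] for j in range(h) for i in range(w)]

def downsample(img):
    """Halve resolution by 2x2 averaging (one pyramid level)."""
    return [[(img[2*j][2*i] + img[2*j][2*i+1] + img[2*j+1][2*i] + img[2*j+1][2*i+1]) / 4
             for i in range(len(img[0]) // 2)] for j in range(len(img) // 2)]

def pyramid_search(img, tmpl):
    """Coarse-to-fine NCC: full search at half resolution, then refine the
    doubled coarse estimate in a 3x3 window at full resolution."""
    ci, ct = downsample(img), downsample(tmpl)
    ch, cw = len(ct), len(ct[0])
    flat_ct = [v for row in ct for v in row]
    best, bx, by = -2.0, 0, 0
    for y in range(len(ci) - ch + 1):
        for x in range(len(ci[0]) - cw + 1):
            s = ncc(extract(ci, x, y, cw, ch), flat_ct)
            if s > best:
                best, bx, by = s, x, y
    th, tw = len(tmpl), len(tmpl[0])
    flat_t = [v for row in tmpl for v in row]
    best, fx, fy = -2.0, 2 * bx, 2 * by
    for y in range(max(0, 2*by - 1), min(len(img) - th, 2*by + 1) + 1):
        for x in range(max(0, 2*bx - 1), min(len(img[0]) - tw, 2*bx + 1) + 1):
            s = ncc(extract(img, x, y, tw, th), flat_t)
            if s > best:
                best, fx, fy = s, x, y
    return fx, fy, best
```

The speedup comes from the coarse level covering a quarter of the positions with a quarter-size template, leaving only a tiny window to evaluate at full resolution.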

7.
In this paper, the first stage of studies concerning the computer analysis of hand X-ray digital images is described. The images are preprocessed and skeletonization of the fingers is carried out. Then, the interphalangeal and metacarpophalangeal joints are detected and contoured, and joint widths are measured. The obtained results largely concur with those obtained by other authors—see Beier et al. [Segmentation of medical images combining local, regional, global, and hierarchical distances into a bottom-up region merging scheme, Proc. SPIE 5747 (2005) 546-555], Klooster et al. [Automatic quantification of osteoarthritis in hand radiographs: validation of a new method to measure joint space width, Osteoarthritis and Cartilage 16 (1) (2008) 18-25], Ogiela et al. [Image languages in intelligent radiological palm diagnostics, Pattern Recognition 39 (2006) 2157-2165] and Ogiela and Tadeusiewicz [Picture languages in automatic radiological palm interpretation, Int. J. Appl. Math. Comput. Sci. 15 (2) (2005) 305-312].

8.
A new approach to template image matching is presented. The method first converts the image into edges; the vital information of these edges is then represented as a set of vectors in a four-dimensional hyperspace. A modified Radon transform is proposed to facilitate this vectorization process. All of the above processing is done offline for the main image of the area. The template image to be matched against the main image is vectorized in the same fashion in real time. A vector-matching algorithm is proposed that delivers the match location at very low computational cost. It works over a wide range of template scaling and noise conditions not handled by previous algorithms in the literature.
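For orientation, a plain discrete Radon transform, the starting point that the paper modifies, can be sketched as line sums over a small image. This is the standard transform under our own nearest-neighbor binning, not the paper's modified version or its 4-D vectorization.

```python
# Sketch of a basic discrete Radon transform: for each angle, sum pixel
# values along parallel lines (nearest-neighbor binning of the offset).
from math import cos, sin, radians

def radon(img, angles_deg):
    h, w = len(img), len(img[0])
    cx, cy = (w - 1) / 2, (h - 1) / 2
    n_off = int((w * w + h * h) ** 0.5) + 1   # enough offset bins
    out = {}
    for a in angles_deg:
        c, si = cos(radians(a)), sin(radians(a))
        proj = [0.0] * n_off
        for y in range(h):
            for x in range(w):
                off = (x - cx) * c + (y - cy) * si  # signed distance to center line
                proj[int(round(off + n_off / 2))] += img[y][x]
        out[a] = proj
    return out
```

A vertical line in the image collapses into a single sharp peak in the 0-degree projection, which is the property that makes Radon-domain features convenient for matching linear structures.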

9.
In image analysis, the concept of similarity has been widely explored and various measures of similarity, or of distance, have been proposed that yield a quantitative evaluation. There are cases, however, in which the evaluation of similarity should reproduce the judgment of a human observer based mainly on qualitative and, possibly, subjective appraisal of perceptual features. This process is best modeled as a cognitive process based on knowledge structures and inference strategies, able to incorporate human reasoning mechanisms and to handle their inherent uncertainties. This article proposes a general strategy for similarity evaluation in image analysis considered as a cognitive process. A salient aspect is the use of fuzzy logic propositions to represent knowledge structures, and fuzzy reasoning to model inference mechanisms. Specific similarity evaluation procedures are presented that demonstrate how the same general strategy can be applied to different image analysis problems. © 1993 John Wiley & Sons, Inc.
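The fuzzy-reasoning flavor of such an evaluation can be sketched with one rule. The membership function, rule, and feature vectors below are our own toy example, not a procedure from the article.

```python
# Hypothetical sketch: one fuzzy rule for similarity evaluation,
# "IF every feature difference is small THEN the images are similar",
# evaluated with the min operator.

def tri(x, a, b, c):
    """Triangular fuzzy membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzy_similarity(features_a, features_b):
    """Degree of similarity of two feature vectors in [0, 1]."""
    small = lambda d: tri(d, -1.0, 0.0, 1.0)  # membership of 'difference is small'
    return min(small(abs(x - y)) for x, y in zip(features_a, features_b))
```

Unlike a crisp distance threshold, the result is a graded degree of match, which is what lets such systems mimic qualitative human judgment.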

10.
Degraded image analysis: an invariant approach (cited by 8: 0 self-citations, 8 by others)
Analysis and interpretation of an image acquired by a nonideal imaging system is a key problem in many application areas. The observed image is usually corrupted by blurring, spatial degradations, and random noise. Classical methods like blind deconvolution try to estimate the blur parameters and restore the image. We propose an alternative approach: we derive features for image representation that are invariant with respect to blur, regardless of the degradation PSF, provided that it is centrally symmetric. As we prove in the paper, there exist two classes of such features, one in the spatial domain and one in the frequency domain. We also derive so-called combined invariants, which are invariant to composite geometric and blur degradations. Knowing these features, we can recognize objects in the degraded scene without any restoration.
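A fact underlying such spatial-domain blur invariants is that every odd-order central moment of a centrally symmetric PSF is zero, so certain moment combinations of the image pass through the blur unchanged. The small check below illustrates only this symmetry property with our own toy PSFs; it is not the paper's invariant construction.

```python
def central_moment(img, p, q):
    """Central moment mu_pq of a 2-D array given as a list of lists."""
    h, w = len(img), len(img[0])
    m00 = sum(img[y][x] for y in range(h) for x in range(w))
    cx = sum(x * img[y][x] for y in range(h) for x in range(w)) / m00
    cy = sum(y * img[y][x] for y in range(h) for x in range(w)) / m00
    return sum((x - cx) ** p * (y - cy) ** q * img[y][x]
               for y in range(h) for x in range(w))
```

For a centrally symmetric kernel the odd-order moments (p + q odd) vanish, while an asymmetric kernel leaves a nonzero signature; blur invariants exploit exactly this cancellation.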

11.
12.
13.
A new approach to image segmentation is presented using a variational framework. Treating edge points as interpolation points, it minimizes an energy functional to interpolate a smooth threshold surface, which then carries out the segmentation. In order to preserve the edge information of the original image in the threshold surface without unduly sharpening the edges of the image, a non-convex energy functional is adopted. A relaxation algorithm with the property of global convergence is proposed for solving the optimization problem by introducing a binary energy. As a result, the non-convex optimization problem is transformed into a series of convex optimization problems, and the problem of slow convergence or nonconvergence is solved. The presented method is also tested experimentally. Finally, the method of determining the optimization parameters is explored.
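The threshold-surface idea can be sketched with the simplest smooth interpolant: fix the gray values at a few edge points and relax the rest of the surface toward the average of its neighbors, then threshold the image against the surface. This uses plain Laplacian relaxation (a convex surrogate) under our own assumptions, not the paper's non-convex functional or its binary-energy relaxation.

```python
def threshold_surface(h, w, anchors, iters=200):
    """Interpolate a smooth surface through anchor points {(x, y): value}
    by iterative Laplacian relaxation; anchor values stay fixed."""
    s = [[sum(anchors.values()) / len(anchors)] * w for _ in range(h)]
    for (x, y), v in anchors.items():
        s[y][x] = v
    for _ in range(iters):
        for y in range(h):
            for x in range(w):
                if (x, y) in anchors:
                    continue
                nb = [s[y2][x2] for x2, y2 in ((x-1, y), (x+1, y), (x, y-1), (x, y+1))
                      if 0 <= x2 < w and 0 <= y2 < h]
                s[y][x] = sum(nb) / len(nb)
    return s

def segment(img, surface):
    """Binarize by comparing each pixel against the local threshold."""
    return [[1 if img[y][x] > surface[y][x] else 0
             for x in range(len(img[0]))] for y in range(len(img))]
```

Because the surface adapts to the gray values observed at the edges, this kind of segmentation copes with uneven illumination better than a single global threshold.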

14.
Fast and accurate analysis of fluorescence in-situ hybridization images for signal counting depends mainly on two components: a classifier to discriminate between artifacts and valid signals of several fluorophores (colors), and well-discriminating features to represent the signals. Our previous work (2001) focused on the first component. To investigate the second, we evaluate candidate feature sets by examining the probability density functions and scatter plots of the features. The analysis provides a first insight into dependencies between features, indicates the relative importance of members of a feature set, and helps identify sources of potential classification errors. Class separability yielded by different feature subsets is evaluated using the accuracy of several neural network-based classification strategies, some of them hierarchical, as well as a feature selection technique based on a scatter criterion. Although applied to cytogenetics, the paper presents a comprehensive, unifying methodology for qualitative and quantitative evaluation of pattern feature representations essential for accurate image classification. This methodology is applicable to many other real-world pattern recognition problems.
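A scatter criterion for feature selection, of the kind the abstract mentions, can be sketched for the two-class case as between-class distance over within-class spread. The score formula and the toy data are our own minimal choice (a Fisher-style ratio), not necessarily the exact criterion used in the paper.

```python
def fisher_score(values, labels):
    """Scatter criterion for one feature: squared distance between the two
    class means divided by the sum of the class variances."""
    a = [v for v, l in zip(values, labels) if l == 0]
    b = [v for v, l in zip(values, labels) if l == 1]
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    va = sum((v - ma) ** 2 for v in a) / len(a)
    vb = sum((v - mb) ** 2 for v in b) / len(b)
    return (ma - mb) ** 2 / (va + vb + 1e-12)

def rank_features(matrix, labels):
    """Rank feature indices by decreasing class separability."""
    scores = [fisher_score([row[j] for row in matrix], labels)
              for j in range(len(matrix[0]))]
    return sorted(range(len(scores)), key=lambda j: -scores[j])
```

A feature that cleanly separates the two classes scores orders of magnitude higher than a noisy one, so ranking by this criterion gives a quick first cut before training any classifier.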

15.
Image analysis algorithms are often highly parameterized, and much human input is needed to optimize parameter settings, incurring a time cost of up to several days. We analyze and characterize the conventional parameter optimization process for image analysis and formulate user requirements. With this as input, we propose a change in paradigm by optimizing parameters based on parameter sampling and interactive visual exploration. To save time and reduce memory load, users are only involved in the first step (initialization of sampling) and the last step (visual analysis of output). This helps users explore the parameter space more thoroughly and produce higher quality results. We describe a custom sampling plug-in we developed for CellProfiler, a popular biomedical image analysis framework. Our main focus is the development of an interactive visualization technique that enables users to analyze the relationships between sampled input parameters and corresponding output. We implemented this in a prototype called Paramorama. It provides users with a visual overview of parameters and their sampled values. User-defined areas of interest are presented in a structured way that includes image-based output and a novel layout algorithm. To find optimal parameter settings, users can tag high- and low-quality results to refine their search. We include two case studies to illustrate the utility of this approach.
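The sample-then-tag workflow can be sketched in a few lines: enumerate the Cartesian product of parameter values, run the pipeline on each configuration, and partition results by a quality tag. The parameter names, pipeline, and quality test below are hypothetical stand-ins, not CellProfiler or Paramorama code.

```python
# Hypothetical sketch of parameter-space sampling and result tagging.
from itertools import product

def sample_parameters(space):
    """Yield one {name: value} configuration per combination of the
    per-parameter value lists in `space`."""
    names = sorted(space)
    for values in product(*(space[n] for n in names)):
        yield dict(zip(names, values))

def explore(space, run, is_good):
    """Run the pipeline on every sampled configuration and split the
    configurations by a user-style quality judgment."""
    good, bad = [], []
    for cfg in sample_parameters(space):
        (good if is_good(run(cfg)) else bad).append(cfg)
    return good, bad
```

The human is only needed to define the value lists up front and to judge the outputs at the end, which is precisely the two-touchpoint workflow the paper argues for.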

16.
Dornaika  F.  Khoder  A.  Moujahid  A.  Khoder  W. 《Neural computing & applications》2022,34(19):16879-16895
Neural Computing and Applications - The performance of machine learning and pattern recognition algorithms generally depends on data representation. That is why much of the current effort in...

17.

With the advancement of image acquisition devices and social networking services, a huge volume of image data is generated. These images are manipulated with various image and video processing applications, and thus original images get tampered. Such tampered images are a prime source of spreading fake news, defaming personalities and, in some cases (when used as evidence), misleading law enforcement bodies. Hence, before relying on image data, the authenticity of the image must be verified. Works in the literature verify the authenticity of an image based on noise inconsistency. However, these works suffer from confusion between edges and noise, require post-processing for localization, and need prior knowledge about the image. To handle these limitations, a noise inconsistency-based technique is presented here to detect and localize a falsified region in an image. This work consists of three major steps: pre-processing, noise estimation and post-processing. Two publicly available datasets are used for the experiments. Results are reported in terms of pixel-level precision, recall, accuracy and F1-score, and are compared with recent state-of-the-art techniques. The average accuracy of the proposed work on the datasets is 91.70%, the highest among the compared techniques.
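The noise-inconsistency premise, that a spliced region carries a different noise level than the rest of the image, can be sketched with a crude block-wise estimator. The block-mean-residual estimator below is our own simplification (real detectors use e.g. Laplacian-based or wavelet-based estimators and must separate edges from noise, the very limitation the paper addresses).

```python
def block_noise(img, bs):
    """Crude per-block noise estimate: standard deviation of each block
    after removing the block mean."""
    h, w = len(img), len(img[0])
    est = {}
    for by in range(0, h - bs + 1, bs):
        for bx in range(0, w - bs + 1, bs):
            vals = [img[by + j][bx + i] for j in range(bs) for i in range(bs)]
            m = sum(vals) / len(vals)
            est[(bx, by)] = (sum((v - m) ** 2 for v in vals) / len(vals)) ** 0.5
    return est

def flag_inconsistent(est, factor=3.0):
    """Flag blocks whose noise level deviates from the median by `factor`,
    i.e. candidate tampered regions."""
    levels = sorted(est.values())
    med = levels[len(levels) // 2]
    return [k for k, v in est.items() if v > factor * med + 1e-9]
```

In a smooth image, a pasted-in block with a different noise level stands out against the median, which is the signal such localization methods look for.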


18.
19.
Cheng-Hsing   《Pattern recognition》2008,41(8):2674-2683
Capacity and invisibility are two goals of information-hiding methods. Because these goals conflict, hiding large messages in a cover image while remaining invisible is an interesting challenge. The simple least-significant-bit (LSB) substitution approach, which embeds secret messages into the LSBs of pixels in cover images, can embed huge secret messages; however, after a large message is embedded, the quality of the stego-image is significantly degraded. In this paper, a new LSB-based method, called the inverted pattern (IP) LSB substitution approach, is proposed to improve the quality of the stego-image. Each section of the secret data is either inverted or left unchanged before it is embedded. The decisions are recorded in an IP for the purpose of extracting the data, and the pattern can be seen as a secret key or as extra data to be re-embedded. The experimental results show that the proposed method runs fast and gives better results than those of previous works.
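The inverted-pattern idea can be sketched directly: for each section of secret bits, count how many cover LSBs would have to flip, invert the section when that lowers the count, and record the decision in the IP key. This is a minimal sketch of the principle with our own section layout and function names; the paper's exact embedding details may differ.

```python
def embed_ip_lsb(cover, secret_bits, section):
    """Embed bits into pixel LSBs section by section; a section is inverted
    when that reduces the number of LSB flips, and the decision is recorded
    in the inverted-pattern (IP) key needed for extraction."""
    stego, ip = list(cover), []
    for s in range(0, len(secret_bits), section):
        chunk = secret_bits[s:s + section]
        flips = sum((cover[s + i] & 1) != b for i, b in enumerate(chunk))
        invert = int(flips > len(chunk) - flips)   # inverting flips fewer LSBs
        ip.append(invert)
        for i, b in enumerate(chunk):
            stego[s + i] = (cover[s + i] & ~1) | (b ^ invert)
    return stego, ip

def extract_ip_lsb(stego, n_bits, section, ip):
    """Recover the secret bits by undoing the recorded inversions."""
    return [(stego[i] & 1) ^ ip[i // section] for i in range(n_bits)]
```

Since the inverted choice is taken only when it flips fewer pixels, each section changes at most half of its LSBs, which is how the method keeps stego-image quality higher than plain LSB substitution.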

20.