20 similar documents found (search time: 15 ms)
1.
A robust stroke extraction method for Chinese characters is essential to off-line character recognition systems, which depend on stroke structure analysis to function. This study presents a novel stroke extraction method based on a directional filtering technique for extracting reliable and undistorted stroke segments. First, a set of Gabor filters is used to decompose a character image into different directional features. Next, a new iterative thresholding technique that minimizes the reconstruction error is proposed to recover stroke shape. Finally, a refinement process based on measuring the degree of stroke overlap is used to remove redundant stroke pieces. Experimental results show that the proposed method not only provides immunity against the junction-distortion and spurious-branch problems associated with thinning-based processing, but is also insensitive to shape deformation and noise.
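The directional decomposition step can be illustrated with a small sketch. The snippet below is a minimal illustration, not the authors' implementation: it builds a bank of Gabor kernels at several orientations (kernel size, wavelength, and the fixed threshold are assumed values) and convolves a character image with each kernel to obtain per-direction response maps; the paper's iterative, reconstruction-error-minimizing threshold is replaced here by a crude fixed cutoff.

```python
import cv2
import numpy as np

def directional_responses(char_img, n_orientations=4, ksize=21, sigma=4.0, lambd=8.0):
    """Decompose a grayscale character image into directional feature maps
    using a bank of Gabor filters (one kernel per orientation)."""
    responses = []
    for k in range(n_orientations):
        theta = k * np.pi / n_orientations                    # filter orientation
        kernel = cv2.getGaborKernel((ksize, ksize), sigma, theta, lambd, 0.5, 0, cv2.CV_32F)
        kernel -= kernel.mean()                               # zero-mean to suppress DC response
        responses.append(cv2.filter2D(char_img.astype(np.float32), -1, kernel))
    return responses

# Toy usage: a synthetic image with one horizontal and one vertical stroke.
img = np.zeros((64, 64), np.uint8)
img[30:34, 8:56] = 255                                        # horizontal stroke
img[8:56, 30:34] = 255                                        # vertical stroke
maps = directional_responses(img)
stroke_masks = [m > 0.5 * m.max() for m in maps]              # fixed threshold as a placeholder
```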
2.
Feature extraction from faces using deformable templates
Alan L. Yuille, Peter W. Hallinan, David S. Cohen 《International Journal of Computer Vision》1992,8(2):99-111
3.
A theory is presented for the computation of three-dimensional motion and structure from dynamic imagery, using only line correspondences. The traditional approach of corresponding microfeatures (interest points such as highlights, corners, and other high-curvature points) is reviewed and its shortcomings are discussed. A theory is then presented that gives a closed-form solution to the motion and structure determination problem from line correspondences in three views. The theory is compared with previous ones that are based on nonlinear equations and iterative methods.
4.
There is large demand for more fashionable Chinese character styles in the advertising, art design, and publishing markets. However, it is challenging to create a new font style for the more than 10,000 Chinese characters in use. To address this problem, a comprehensive Chinese font generation scheme is proposed in this paper. First, a decomposition database for stroke splitting and feature extraction is built. Second, stroke segmentation rules are defined based on splitting, merging, a structural model, location definition, and minimum feature extraction. Third, a radical searching algorithm based on stroke splitting is presented. Finally, the generated characters can be zoomed, rotated, and moved. Experimental results show that Chinese characters with a new style can be generated rapidly with the proposed scheme. The created characters fit the real ones well, with a fidelity of 96.4%. Usability tests with participants' subjective reports show that the generated characters perform similarly to the original characters in both a recognizability test and a style-consistency test. The font generation method also applies to other stroke-constructed block characters such as Japanese and Korean characters.
5.
An efficient knowledge-based stroke extraction method for multi-font Chinese characters
Feature extraction is the most important step in pattern recognition, and its quality seriously affects the recognition rate. Selecting strokes as the features to describe a Chinese character is a powerful approach, but it suffers from the difficulty of stroke extraction. In this paper, knowledge about strokes is derived by studying the structure of Chinese characters, and this knowledge is then applied to help extract the strokes. The method not only extracts strokes heuristically but also heuristically eliminates noise, including noise added to strokes for artistic effect. Moreover, the method does not use any preprocessing such as thinning or other transformations, so its extraction speed is very fast.
6.
This paper presents a new method for unsupervised urban area extraction from SAR imagery using two different GMRF models. One model is the T-based GMRF model proposed by Xavier Descombes specifically for extracting urban areas in panchromatic SPOT imagery; when it is applied to SAR imagery, some urban areas are missed. The other model is the conventional GMRF model, which requires training samples for urban area extraction; when it is applied to SAR imagery, the extraction result includes all urban areas along with some false detections. Our method consists of three steps. First, we apply a threshold to the T-based GMRF model parameter T to obtain an initial urban area extraction. Then, taking this result as training samples, we estimate the conventional GMRF model parameters and obtain a new extraction result. Finally, we fuse the two results using a region-growing algorithm to form the final, accurate urban area extraction. Experimental results show that the proposed unsupervised approach can obtain accurate urban area delineation.
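As a rough illustration of the second step, the sketch below estimates conventional GMRF interaction parameters from training pixels by least squares, treating each pixel value as a linear combination of its symmetric neighbor pairs. This is a common estimation scheme for Gaussian MRFs and only an assumption about the authors' procedure; the neighbor offsets and the toy data are placeholders.

```python
import numpy as np

# Symmetric neighbor offsets for a second-order GMRF neighborhood.
OFFSETS = [(0, 1), (1, 0), (1, 1), (1, -1)]

def estimate_gmrf_params(image, mask):
    """Least-squares estimate of GMRF interaction parameters theta from the
    pixels selected by `mask` (e.g. the urban pixels found in step 1)."""
    rows, cols = np.nonzero(mask)
    X, y = [], []
    for r, c in zip(rows, cols):
        feats, ok = [], True
        for dr, dc in OFFSETS:
            r1, c1, r2, c2 = r + dr, c + dc, r - dr, c - dc
            if not (0 <= r1 < image.shape[0] and 0 <= c1 < image.shape[1]
                    and 0 <= r2 < image.shape[0] and 0 <= c2 < image.shape[1]):
                ok = False
                break
            feats.append(image[r1, c1] + image[r2, c2])        # symmetric neighbor sum
        if ok:
            X.append(feats)
            y.append(image[r, c])
    theta, *_ = np.linalg.lstsq(np.asarray(X), np.asarray(y), rcond=None)
    return theta

# Toy usage on a random image with a square training region.
img = np.random.rand(64, 64)
train_mask = np.zeros((64, 64), bool)
train_mask[20:40, 20:40] = True
theta = estimate_gmrf_params(img, train_mask)
```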
Yang Yong. Born in 1978. Currently a postgraduate in the Department of Communication and Information Systems, School of Electronic Information, Wuhan University. His research direction is image processing; his scientific interests are SAR image segmentation and classification with the Markov random field approach.
Hong Sun. Born in 1954. Graduated in electrical engineering from the Huazhong University of Science and Technology in 1982 and received a doctoral degree in 1995. Author of Advanced Digital Signal Processing, which is widely used as a textbook for graduate students in China. Scientific interests include statistical signal processing, image analysis, and communication signal processing.
Yongfeng Cao. Born in 1976. Graduated from Wuhan University, China, in 1999. Assistant and doctoral candidate in the Laboratory of Signal Processing and Modern Communication, School of Electronic Information, Wuhan University, China. Scientific interests include Markov random fields, the watershed transformation, and SAR image interpretation.
7.
Pralay Pal, A.M. Tigga 《Computer aided design》2005,37(5):545-558
Syntactic recognition, graph-based methods, expert systems, and knowledge-based approaches are the common feature recognition techniques available today. This work discusses the relatively new concept of applying a Genetic Algorithm for Feature Recognition (GAFR) to large CAD databases, which is significant in view of the growing product complexity across all manufacturing domains. The genetic algorithm drives a randomized search through the CAD data: a population is initialized, offspring features are created via crossover, offspring sub-solutions evolve and become extinct, and finally the best alternatives are selected. This method is cheaper than traditional hybrid and heuristic direct-search approaches. A case study is presented with simulation results.
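The genetic search loop described above can be sketched generically. The code below is a minimal, self-contained GA skeleton, not the GAFR implementation: candidate "features" are encoded as bit strings, and the fitness function, population size, and rates are placeholder assumptions.

```python
import random

def run_ga(fitness, genome_len=32, pop_size=40, generations=100,
           crossover_rate=0.8, mutation_rate=0.01):
    """Generic GA loop: initialization, crossover, mutation, selection."""
    pop = [[random.randint(0, 1) for _ in range(genome_len)] for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=fitness, reverse=True)
        survivors = scored[:pop_size // 2]                 # extinction of weak sub-solutions
        children = []
        while len(children) < pop_size - len(survivors):
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, genome_len)
            child = a[:cut] + b[cut:] if random.random() < crossover_rate else a[:]
            child = [g ^ 1 if random.random() < mutation_rate else g for g in child]
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

# Toy usage: a placeholder fitness that simply counts set bits.
best = run_ga(fitness=sum)
```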
8.
To exploit the characteristics of road edges in synthetic aperture radar (SAR) images, a road extraction algorithm for high-resolution SAR imagery based on a multi-condition weighting method is proposed. The algorithm uses Frost filtering to suppress speckle noise and the Otsu algorithm to binarize the high-resolution SAR image; the binarized image is then dilated and eroded, a five-neighbor edge detector combined with the multi-condition weighting method extracts one edge of the road, and finally a bridge-connection scheme is used to extract the complete road edges. Experimental results show that the algorithm can remove noise, suppress interference from obstacles, and extract road edges effectively.
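The preprocessing chain (speckle suppression, Otsu binarization, dilation and erosion) can be sketched with standard OpenCV calls. This is only an approximation: a median filter stands in for the Frost speckle filter used in the paper, the synthetic input replaces a real SAR scene, and the five-neighbor edge detector with multi-condition weighting is replaced by a generic Canny step.

```python
import cv2
import numpy as np

# Synthetic speckled image with a bright road-like band standing in for SAR data.
rng = np.random.default_rng(0)
sar = rng.gamma(4.0, 20.0, size=(256, 256)).clip(0, 255).astype(np.uint8)
sar[120:136, :] = 200

# Speckle suppression: median filter as a simple stand-in for the Frost filter.
smoothed = cv2.medianBlur(sar, 5)

# Otsu binarization of the high-resolution image.
_, binary = cv2.threshold(smoothed, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Dilation followed by erosion to consolidate road-like regions.
kernel = np.ones((3, 3), np.uint8)
closed = cv2.erode(cv2.dilate(binary, kernel, iterations=1), kernel, iterations=1)

# Placeholder edge step; the paper's detector and weighting scheme differ.
edges = cv2.Canny(closed, 50, 150)
```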
9.
Rajeev Rajan, Manaswi Misra, Hema A. Murthy 《International Journal of Speech Technology》2017,20(1):185-204
Modified group delay based algorithms for estimating melodic pitch sequences from heterophonic/polyphonic music are discussed in this paper. Two variants of the modified group delay function are proposed: (a) a system-based variant, MODGD (Direct), and (b) a source-based variant, MODGD (Source). In (a), the standard modified group delay function (MODGDF) is used to estimate the prominent melodic pitch \(f_0\), which appears like a low-frequency formant in the MODGDF spectrum. In (b), the power spectrum of the signal is first flattened to emphasise the source; the flattened power spectrum behaves like a sinusoid in noise, the frequency of the sinusoid being related to the pitch frequency, and the modified group delay function of this signal produces peaks at \(T_0, 2T_0, \ldots\), where \(T_0 = 1/f_0\). Continuity constraints in a dynamic programming framework are imposed across frames to reduce octave errors. Sudden changes in pitch are accommodated by changing the frame size dynamically using a multi-resolution framework. The performance of the proposed systems was evaluated on four datasets: ADC-2004, LabROSA, MIREX-2008, and a Carnatic music dataset. The results demonstrate the potential of group delay based methods for melody extraction.
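The group delay computation at the core of these methods can be sketched in a few lines. The snippet below computes the standard group delay of a frame via the FFT identity \(\tau(\omega) = (X_R Y_R + X_I Y_I)/|X(\omega)|^2\), where \(Y\) is the transform of \(n\,x[n]\); the cepstral smoothing and the full MODGDF parameterisation used in the paper are not reproduced, so treat this as an assumed simplification.

```python
import numpy as np

def group_delay(frame, n_fft=2048, eps=1e-8):
    """Standard group delay of a windowed frame, computed without explicit
    phase unwrapping via tau = (Xr*Yr + Xi*Yi) / |X|^2."""
    n = np.arange(len(frame))
    X = np.fft.rfft(frame, n_fft)
    Y = np.fft.rfft(n * frame, n_fft)            # transform of n * x[n]
    return (X.real * Y.real + X.imag * Y.imag) / (np.abs(X) ** 2 + eps)

# Toy usage: a clean 220 Hz sinusoid sampled at 16 kHz.
sr, f0 = 16000, 220.0
t = np.arange(1024) / sr
tau = group_delay(np.sin(2 * np.pi * f0 * t) * np.hanning(1024))
```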
10.
11.
Extracting perceptually meaningful strokes plays an essential role in modeling structures of handwritten Chinese characters for accurate character recognition. This paper proposes a cascade Markov random field (MRF) model that combines both bottom-up (BU) and top-down (TD) processes for stroke extraction. In the low-level stroke segmentation process, we use a BU MRF model with a smoothness prior to segment the character skeleton into directional substrokes based on self-organization of pixel-based directional features. In the high-level stroke extraction process, the segmented substrokes are sent to a TD MRF-based character model that, in turn, feeds back to guide the merging of corresponding substrokes to produce reliable candidate strokes for character recognition. The merit of the cascade MRF model lies in its ability to encode the local statistical dependencies of neighboring stroke components as well as prior knowledge of Chinese character structures. Encouraging stroke extraction and character recognition results confirm the effectiveness of our method, which integrates both BU and TD vision processing streams within a unified MRF framework.
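The bottom-up stage, which labels skeleton pixels with stroke directions under a smoothness prior, can be approximated by a simple iterated conditional modes (ICM) pass over an MRF energy. This is a generic sketch under assumed data costs and a Potts smoothness term, not the authors' cascade model.

```python
import numpy as np

def icm_directional_labels(data_cost, beta=1.0, iters=5):
    """data_cost: (H, W, K) array giving, per skeleton pixel, the cost of each
    of K direction labels. Returns an (H, W) label map smoothed by a Potts prior."""
    H, W, K = data_cost.shape
    labels = data_cost.argmin(axis=2)
    for _ in range(iters):
        for r in range(H):
            for c in range(W):
                best, best_e = labels[r, c], np.inf
                for k in range(K):
                    e = data_cost[r, c, k]
                    # Potts smoothness: penalize disagreement with 4-neighbors.
                    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                        rr, cc = r + dr, c + dc
                        if 0 <= rr < H and 0 <= cc < W and labels[rr, cc] != k:
                            e += beta
                    if e < best_e:
                        best, best_e = k, e
                labels[r, c] = best
    return labels

# Toy usage with random costs for 4 direction labels.
costs = np.random.rand(32, 32, 4)
lab = icm_directional_labels(costs)
```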
12.
13.
A new method is proposed to extract urban areas from SAR imagery using two different Gaussian Markov Random Field (GMRF) models. First, after an initial segmentation with a watershed algorithm, we adopt a particular GMRF model proposed by Descombes et al. (called the RGMRF model, to distinguish it from the conventional GMRF model) to extract urban areas; this first model extracts part of the urban areas in the SAR image but misses some detections. Then, taking this first result as a training sample, we redo the extraction with the conventional GMRF model; this second model detects a larger area that includes all urban areas along with some false detections. Finally, we fuse the two results using a region-growing algorithm to form the final detected urban area. Experimental results show that the proposed method can obtain accurate urban area delineation.
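The fusion step, growing the conservative first result inside the permissive second result, can be realised with morphological reconstruction, one standard way to implement region growing from seeds. The actual region-growing algorithm of the paper may differ, so the snippet below is only an assumed equivalent on toy data.

```python
import numpy as np
from skimage.morphology import reconstruction

def fuse_by_region_growing(seed_mask, candidate_mask):
    """Grow the reliable-but-incomplete detections (seed_mask) inside the
    complete-but-noisy detections (candidate_mask). Seeds expand only through
    connected candidate pixels, discarding false detections not touching a seed."""
    seed = np.logical_and(seed_mask, candidate_mask).astype(np.uint8)
    grown = reconstruction(seed, candidate_mask.astype(np.uint8), method='dilation')
    return grown.astype(bool)

# Toy usage with two binary maps standing in for the two GMRF results.
rng = np.random.default_rng(0)
m1 = rng.random((64, 64)) > 0.7                      # conservative result (missed detections)
m2 = np.logical_or(m1, rng.random((64, 64)) > 0.6)   # permissive result (false alarms)
urban = fuse_by_region_growing(m1, m2)
```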
14.
A Hilbert-Huang transform (HHT) method based on morphological directional stroke extraction is proposed for feature extraction from off-line handwritten Chinese characters. By analyzing the Hilbert spectrum and marginal spectrum obtained from the HHT, a 4-dimensional structural feature and a 32-dimensional statistical feature are constructed and fused into a 36-dimensional feature vector used as the final recognition feature. Experimental results show that the recognition rate is higher than that obtained with the Gabor transform or the wavelet transform alone. Although the recognition speed is slower than that of moment-based and morphological methods, it is clearly faster than the Gabor transform and somewhat faster than multi-method feature fusion. The study shows that the HHT, as a new signal analysis method, can be effectively applied to extracting features from Chinese character images.
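The HHT feature pipeline (empirical mode decomposition followed by Hilbert spectral analysis) can be sketched with the PyEMD and SciPy packages. The directional stroke extraction, the exact 4-D/32-D feature definitions, and the fusion step are not reproduced here, so treat the histogram below as a placeholder for the statistical feature.

```python
import numpy as np
from PyEMD import EMD                      # pip install EMD-signal
from scipy.signal import hilbert

def hht_marginal_features(signal, n_bins=32, fs=1.0):
    """Decompose a 1-D stroke-projection signal with EMD, compute instantaneous
    amplitude/frequency per IMF, and return a coarse marginal-spectrum histogram."""
    imfs = EMD().emd(signal)
    freqs, amps = [], []
    for imf in imfs:
        analytic = hilbert(imf)
        amp = np.abs(analytic)
        phase = np.unwrap(np.angle(analytic))
        inst_freq = np.diff(phase) * fs / (2 * np.pi)
        freqs.append(inst_freq)
        amps.append(amp[:-1])
    freqs, amps = np.concatenate(freqs), np.concatenate(amps)
    # Marginal spectrum: amplitude accumulated over time per frequency bin.
    marginal, _ = np.histogram(freqs, bins=n_bins, range=(0, fs / 2), weights=amps)
    return marginal                          # stand-in for the 32-D statistical feature

features = hht_marginal_features(np.random.randn(512))
```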
15.
Automatic recognition of handwritten alphanumeric characters is achieved by making use of topological feature extraction and multi-level decision making. By specifying a set of easily detectable topological features, the handwritten characters are automatically converted into stylized forms and classified into primary categories, each containing one or several character pattern classes with similar topological configurations. Final recognition is accomplished by a secondary stage that performs local analysis on the characters in each primary category. The recognition system thus consists of two stages: global recognition followed by local recognition. Automatic character stylization results in pattern clustering, which simplifies the classification tasks considerably while allowing a high degree of generality in the acceptable writing format. Simulation of this scheme on a digital computer has shown only 2% misrecognition.
16.
Miguel L. Bote-Lorenzo, Yannis A. Dimitriadis, Eduardo Gómez-Sánchez 《Pattern recognition》2003,36(7):1605-1617
The novel prototype extraction method presented in this paper aims to advance the comprehension of handwriting generation and to improve on-line recognition systems. The extraction process is performed in two stages. First, Fuzzy ARTMAP is used to group character instances according to classification criteria. Then, an algorithm refines these groups and computes the prototypes. Experimental results on the UNIPEN international database show that the proposed system is able to extract a small number of prototypes that are easily recognizable. In addition, the extraction method condenses knowledge that can be used to initialize an LVQ-based recognizer, achieving an average recognition rate of 90.15%, comparable to that reached by human readers.
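A rough sketch of the two-stage idea is given below, using k-means as a stand-in for the paper's Fuzzy ARTMAP grouping: instances of each character class are clustered, and the cluster centroids serve as prototypes that could seed an LVQ codebook. Feature dimensions and cluster counts are placeholder assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def extract_prototypes(X, y, clusters_per_class=3):
    """X: (N, D) feature vectors of character instances; y: (N,) class labels.
    Returns prototype vectors and their labels, one set of centroids per class."""
    prototypes, labels = [], []
    for cls in np.unique(y):
        Xc = X[y == cls]
        k = min(clusters_per_class, len(Xc))
        km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(Xc)
        prototypes.append(km.cluster_centers_)
        labels.extend([cls] * k)
    return np.vstack(prototypes), np.array(labels)

# Toy usage; the prototypes could initialize the codebook of an LVQ recognizer.
X = np.random.rand(300, 16)
y = np.random.randint(0, 5, size=300)
protos, proto_labels = extract_prototypes(X, y)
```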
17.
Smooth surface extraction using partial differential equations (PDEs) is a well-known and widely used technique for visualizing volume data. Existing approaches operate on gridded data, mainly on regular structured grids. When considering unstructured point-based volume data, where sample points neither form regular patterns nor are connected in any form, one would typically resample the data over a grid prior to applying the known PDE-based methods. We propose an approach that directly extracts smooth surfaces from unstructured point-based volume data without prior resampling or mesh generation. When operating on unstructured data, one needs to quickly derive neighborhood information. The respective information is retrieved by partitioning the 3D domain into cells using a kd-tree and operating on its cells. We exploit neighborhood information to estimate gradients and mean curvature at every sample point using a four-dimensional least-squares fitting approach. Gradients and mean curvature are required for applying the chosen PDE-based method, which combines hyperbolic advection to an isovalue of a given scalar field with mean curvature flow. Since we are using an explicit time-integration scheme, time steps and neighbor locations are bounded to ensure convergence of the process. To avoid small global time steps, we use asynchronous local integration. We extract the surface by successively fitting a smooth auxiliary function to the data set. This auxiliary function is initialized as a signed distance function. For each sample and for every time step we compute the respective gradient, the mean curvature, and a stable time step. With this information, the auxiliary function is updated using explicit Euler time integration. The process successively continues with the next sample point in time. If the norm of the auxiliary function gradient in a sample exceeds a given threshold at some time, the auxiliary function is reinitialized to a signed distance function. After convergence of the evolution, the resulting smooth surface is obtained by extracting the zero isosurface from the auxiliary function using direct isosurface extraction from unstructured point-based volume data and rendering the extracted surface using point-based rendering methods.
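The neighborhood search and least-squares gradient estimation on scattered samples can be sketched with SciPy. The snippet fits a local linear model to the scalar values of each point's k nearest neighbors and takes its slope as the gradient estimate; the curvature estimation and the PDE evolution itself are omitted, and k is an assumed parameter.

```python
import numpy as np
from scipy.spatial import cKDTree

def estimate_gradients(points, values, k=12):
    """points: (N, 3) unstructured sample positions; values: (N,) scalar field.
    Fits f(x) ~ a + g.(x - x0) over each point's k nearest neighbors and
    returns the per-point gradient estimates g."""
    tree = cKDTree(points)
    grads = np.zeros_like(points)
    for i, p in enumerate(points):
        _, idx = tree.query(p, k=k)
        dp = points[idx] - p                           # neighbor offsets
        A = np.hstack([np.ones((k, 1)), dp])           # [1, dx, dy, dz] design matrix
        coef, *_ = np.linalg.lstsq(A, values[idx], rcond=None)
        grads[i] = coef[1:]                            # slope part = gradient estimate
    return grads

# Toy usage on a random point cloud with a smooth scalar field.
pts = np.random.rand(500, 3)
vals = pts[:, 0] ** 2 + pts[:, 1]
g = estimate_gradients(pts, vals)
```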
18.
The use of visual information from lip movements can improve the accuracy and robustness of a speech recognition system. In this paper, a region-based lip contour extraction algorithm based on a deformable model is proposed. The algorithm employs a stochastic cost function to partition a color lip image into lip and non-lip regions such that the joint probability of the two regions is maximized. Given a discrete probability map generated by spatial fuzzy clustering, we show how the optimization of the cost function can be done in the continuous setting. The region-based approach makes the algorithm more tolerant to noise and artifacts in the image. It also allows a larger region of attraction, making the algorithm less sensitive to initial parameter settings. The algorithm works on unadorned lips, and accurate extraction of the lip contour is possible.
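The stochastic region cost, maximising the joint probability of lip and non-lip regions given a per-pixel probability map, can be written down compactly. The sketch below evaluates such a cost for a candidate region mask produced by any parametric lip template; the template itself, the fuzzy-clustering step, and the optimiser are not shown and are assumptions.

```python
import numpy as np

def region_log_likelihood(prob_lip, lip_mask, eps=1e-9):
    """prob_lip: (H, W) probability map from spatial fuzzy clustering;
    lip_mask: boolean (H, W) region proposed by the deformable template.
    Returns the joint log-probability of lip pixels inside the region and
    non-lip pixels outside it (to be maximised over template parameters)."""
    inside = np.log(prob_lip[lip_mask] + eps).sum()
    outside = np.log(1.0 - prob_lip[~lip_mask] + eps).sum()
    return inside + outside

# Toy usage: a random probability map and a rectangular candidate region.
prob = np.random.rand(120, 160)
mask = np.zeros((120, 160), bool)
mask[40:80, 50:110] = True
score = region_log_likelihood(prob, mask)
# A template optimiser would propose candidate masks (e.g. from ellipse or
# polynomial lip parameters) and keep the one with the highest score.
```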
19.
Linsen L, Van Long T, Rosenthal P, Rosswog S 《IEEE transactions on visualization and computer graphics》2008,14(6):1483-1490
Data sets resulting from physical simulations typically contain a multitude of physical variables. It is, therefore, desirable that visualization methods take into account the entire multi-field volume data rather than concentrating on one variable. We present a visualization approach based on surface extraction from multi-field particle volume data. The surfaces segment the data with respect to the underlying multi-variate function. Decisions on segmentation properties are based on the analysis of the multi-dimensional feature space. The feature space exploration is performed by an automated multi-dimensional hierarchical clustering method, whose resulting density clusters are shown in the form of density level sets in a 3D star coordinate layout. In the star coordinate layout, the user can select clusters of interest. A selected cluster in feature space corresponds to a segmenting surface in object space. Based on the segmentation property induced by the cluster membership, we extract a surface from the volume data. Our driving applications are Smoothed Particle Hydrodynamics (SPH) simulations, where each particle carries multiple properties. The data sets are given in the form of unstructured point-based volume data. We directly extract our surfaces from such data without prior resampling or grid generation. The surface extraction computes individual points on the surface, which is supported by an efficient neighborhood computation. The extracted surface points are rendered using point-based rendering operations. Our approach combines methods in scientific visualization for object-space operations with methods in information visualization for feature-space operations.
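The feature-space-driven segmentation can be approximated as follows: particle attribute vectors are clustered (agglomerative clustering stands in for the paper's density-based hierarchical method), a cluster of interest is selected, and its member particles define the region whose boundary would then be surfaced. Cluster counts and attribute names are assumptions.

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering

# Toy multi-field SPH-like data: positions plus two attributes per particle.
rng = np.random.default_rng(1)
positions = rng.random((2000, 3))
attributes = np.column_stack([rng.normal(size=2000),      # e.g. density
                              rng.normal(size=2000)])     # e.g. temperature

# Cluster in feature space (stand-in for hierarchical density clustering).
labels = AgglomerativeClustering(n_clusters=4).fit_predict(attributes)

# Selecting one cluster in feature space selects a particle subset in object
# space; a surface would then be extracted around these particles.
selected = positions[labels == 2]
```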
20.
This paper deals with a modified combined wavelet transform technique developed to analyse multilead electrocardiogram signals for cardiac disease diagnostics. Two wavelets are used: a quadratic spline wavelet (QSWT) for QRS detection and the Daubechies six-coefficient (DU6) wavelet for P and T detection. After the fundamental electrocardiogram waves are detected, the electrocardiogram parameters required for disease diagnostics are extracted. The software has been validated by extensive testing on the CSE DS-3 database and the MIT/BIH database. A procedure using the electrocardiogram parameters with a point-scoring system has been developed for the diagnosis of cardiac diseases, namely tachycardia, bradycardia, left ventricular hypertrophy, and right ventricular hypertrophy. As the diagnostic results have not yet been disclosed by the CSE group, two alternative diagnostic criteria were used to check the diagnostic authenticity of the test results; the consistency and reliability of the identified and measured parameters were confirmed when both diagnostic criteria gave the same results.
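The wavelet-based delineation step can be sketched with PyWavelets and SciPy: a stationary wavelet transform with a spline-like wavelet concentrates QRS energy in a detail band, and R peaks are picked from its envelope. The db6-based P/T detection, the parameter measurements, and the point-scoring diagnosis are not reproduced, and the wavelet choice, level, and thresholds here are assumptions rather than the paper's settings.

```python
import numpy as np
import pywt
from scipy.signal import find_peaks

def detect_qrs(ecg, fs, wavelet="bior2.2", level=3):
    """Rough QRS detection: a stationary wavelet transform concentrates QRS
    energy in a mid-scale detail band; R peaks are picked from its envelope."""
    n = len(ecg) - len(ecg) % (2 ** level)           # swt needs a length divisible by 2**level
    coeffs = pywt.swt(ecg[:n], wavelet, level=level)
    detail = coeffs[0][1]                             # detail band at the deepest level
    envelope = np.abs(detail)
    peaks, _ = find_peaks(envelope, height=4 * envelope.mean(),
                          distance=int(0.25 * fs))    # refractory period ~250 ms
    return peaks

# Toy usage on a synthetic impulse train standing in for an ECG.
fs = 360
ecg = np.zeros(fs * 10)
ecg[::fs] = 1.0
r_peaks = detect_qrs(ecg, fs)
```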