Similar Documents
20 similar documents found.
1.
This paper presents a methodology for document processing that separates text paragraphs from images. It is based on the recognition of text characters and words, separating text paragraphs from images efficiently while preserving their relationships so that the original page can be reconstructed. Text separation and extraction rely on a hierarchical framing process: the process starts by framing a single character after its recognition, continues with the recognition and framing of a word, and ends with the framing of all text lines. The text lines form natural language text that is then analyzed.
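A minimal sketch of the hierarchical framing idea, assuming axis-aligned character bounding boxes; the helper names are illustrative, not the authors' code:

```python
def union(boxes):
    # Merge axis-aligned boxes (x0, y0, x1, y1) into one enclosing frame.
    xs0, ys0, xs1, ys1 = zip(*boxes)
    return (min(xs0), min(ys0), max(xs1), max(ys1))

def frame_hierarchy(words):
    # words: list of word groups, each a list of character boxes.
    # Returns the word frames and the enclosing text-line frame,
    # mirroring the character -> word -> line framing described above.
    word_frames = [union(chars) for chars in words]
    line_frame = union(word_frames)
    return word_frames, line_frame
```

Keeping the frames at every level is what allows the original page layout to be reconstructed afterwards.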

2.
Addressing the problems that bidirectional text poses for multilingual information processing, this paper proposes IBidi, a self-describing bidirectional text processing algorithm oriented toward information processing. The algorithm first preprocesses the character stream, mainly annotating digits and other special characters; it then analyzes the stream and inserts predefined tags that describe character properties for use by information processing systems; finally, IBidi produces its output through a reordering algorithm. On typical test samples the algorithm reaches an accuracy of 96.7%, about 17 percentage points higher than the Unicode bidirectional algorithm; on random test samples, IBidi's accuracy is also about 5% higher than that of the Unicode bidirectional algorithm.
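The first (tagging) and last (reordering) passes of such a pipeline can be approximated with the standard library's Unicode bidirectional categories. This is a hedged toy sketch, not the IBidi algorithm itself: it ignores embeddings, mirroring, and the number-run rules that a real implementation must handle.

```python
import unicodedata

def tag_stream(text):
    # Annotate each character with its Unicode bidirectional category
    # (e.g. 'L', 'R', 'AL', 'EN'), analogous to the tagging pass above.
    return [(ch, unicodedata.bidirectional(ch)) for ch in text]

def reorder(tagged):
    # Toy final pass: reverse maximal runs of right-to-left characters
    # so they appear in visual order.
    out, run = [], []
    for ch, cat in tagged:
        if cat in ('R', 'AL'):
            run.append(ch)
        else:
            out.extend(reversed(run))
            run = []
            out.append(ch)
    out.extend(reversed(run))
    return ''.join(out)
```

For example, `reorder(tag_stream("abאב"))` reverses only the Hebrew run while leaving the Latin prefix in place.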

3.
Word searching in non-structural layouts such as graphical documents is difficult because text words appear at arbitrary orientations among graphical symbols. This paper presents an efficient indexing and retrieval approach for word searching in documents with non-structural layout. The proposed indexing scheme stores the spatial information of the text characters of a document in a character spatial feature table (CSFT); the spatial feature of a text component is derived from neighboring component information. Multi-scaled and multi-oriented components are labeled with characters using support vector machines. For searching, the positional information of characters is obtained from the query string by splitting it into the possible combinations of character pairs; each character pair is located in the document with the help of the CSFT. The retrieved text components are then joined into sequences by spatial information matching, and a string matching algorithm matches the query word against the character-pair sequences in the documents. Experimental results are presented on two datasets of graphical documents, a maps dataset and a seal/logo image dataset, and show that the method efficiently searches query words in unconstrained document layouts of arbitrary orientation.
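A character-pair lookup against an index in the spirit of the CSFT might look as follows; the table here stores only character centroids, a deliberate simplification of the spatial features in the paper, and the names are hypothetical:

```python
from collections import defaultdict

def build_index(chars):
    # chars: list of (label, x, y) character detections.
    index = defaultdict(list)
    for label, x, y in chars:
        index[label].append((x, y))
    return index

def pair_candidates(index, a, b, max_dist=2.0):
    # Find places where character a has character b as a close neighbour,
    # the basic step when a query is split into character pairs.
    hits = []
    for ax, ay in index.get(a, []):
        for bx, by in index.get(b, []):
            if ((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5 <= max_dist:
                hits.append(((ax, ay), (bx, by)))
    return hits
```

A query such as "map" would be split into the pairs ('m','a') and ('a','p'), and the hits joined into sequences by spatial matching.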

4.
In keyword spotting from handwritten documents by text query, word similarity is usually computed by combining character similarities, which should approximate the logarithm of the character probabilities. In this paper, we propose to directly estimate the posterior probability (also called confidence) of candidate characters based on the N-best paths from the candidate segmentation-recognition lattice. After evaluating the candidate segmentation-recognition paths by combining multiple contexts, the scores of the N-best paths are transformed to posterior probabilities using soft-max. The parameter of the soft-max (the confidence parameter) is estimated from the character confusion network, which is constructed by aligning different paths using a string matching algorithm. The posterior probability of a candidate character is the sum of the probabilities of the paths that pass through that character. We compare the proposed posterior probability estimation method with reference methods including the word confidence measure and the text line recognition method. Experimental results for keyword spotting on CASIA-OLHWDB, a large database of unconstrained online Chinese handwriting, demonstrate the effectiveness of the proposed method.
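The soft-max transformation and the path-summation step can be sketched as follows. Here `tau` stands in for the confidence parameter; in the paper it is estimated from the confusion network, whereas here it is simply an argument:

```python
import math

def path_posteriors(path_scores, tau=1.0):
    # Soft-max over N-best path scores, shifted by the max for stability.
    m = max(path_scores)
    exps = [math.exp((s - m) / tau) for s in path_scores]
    z = sum(exps)
    return [e / z for e in exps]

def char_confidence(path_scores, contains_char, tau=1.0):
    # Posterior of a candidate character: sum of posteriors of the
    # paths that pass through that candidate.
    post = path_posteriors(path_scores, tau)
    return sum(p for p, passes in zip(post, contains_char) if passes)
```

With three equally scored paths of which two contain the candidate, the confidence is 2/3, as expected.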

5.
Keyword extraction is an active research topic in natural language processing. Existing deep-learning approaches rarely account for the characteristics of Chinese, make insufficient use of character-granularity information, and leave considerable room for improvement on short Chinese texts. To improve keyword extraction from short texts, this paper addresses automatic keyword extraction from paper abstracts with a sequence-tagging model, BAST (Bidirectional Long Short-Term Memory and Attention Mechanism Based on Sequence Tagging), which combines a bidirectional long short-term memory network (BiLSTM) with an attention mechanism. The input text is first represented both by word-granularity word vectors and by character-granularity character vectors; the BAST model is then trained to extract text features with the BiLSTM and attention layers and to predict a tag for each token; finally, the character-vector model's results are used to correct the keyword extraction results of the word-vector model. On 8,159 paper abstracts, the BAST model achieves an F1 score of 66.93%, 2.08 percentage points higher than the BiLSTM-CRF (Bidirectional Long Short-Term Memory and Conditional Random Field) algorithm, and it also improves on other traditional keyword extraction algorithms. The novelty of the model lies in combining the extraction results of the character-vector and word-vector models, making full use of the features of Chinese text; it extracts keywords from short texts effectively.
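Reading keywords back out of per-token tags is the final step of any sequence-tagging extractor. A sketch with a standard B/I/O scheme (the tag set is an assumption; the abstract does not name it):

```python
def decode_keywords(tokens, tags):
    # Recover keyword spans from sequence-tagging output:
    # B = begin keyword, I = inside keyword, O = outside.
    keywords, current = [], []
    for tok, tag in zip(tokens, tags):
        if tag == 'B':
            if current:
                keywords.append(''.join(current))
            current = [tok]
        elif tag == 'I' and current:
            current.append(tok)
        else:
            if current:
                keywords.append(''.join(current))
            current = []
    if current:
        keywords.append(''.join(current))
    return keywords
```

The same decoder works whether the tokens are words or single Chinese characters, which is what lets character-level results correct word-level ones.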

6.
7.
This paper describes a robust context integration model for online handwritten Japanese text recognition. Based on string class probability approximation, the proposed method evaluates the likelihood of candidate segmentation-recognition paths by combining the scores of character recognition, unary and binary geometric features, and linguistic context. The path evaluation criterion can flexibly combine the scores of various contexts and is insensitive to variability in path length, so the optimal segmentation path with its string class can be found effectively by Viterbi search. Moreover, the model parameters are estimated with a genetic algorithm so as to optimize holistic string recognition performance. In experiments on horizontal text lines extracted from the TUAT Kondate database, the proposed method achieves a segmentation f-measure of 0.9934 and a character recognition rate of 92.80%.
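The Viterbi search over candidate segmentation points can be sketched as a dynamic program; `seg_score` is a placeholder for the combined character, geometric, and linguistic score described above:

```python
def best_segmentation(n, seg_score, max_len=4):
    # Dynamic programming over candidate segmentation points of a
    # sequence of length n. seg_score(i, j) scores the segment covering
    # positions [i, j); higher is better.
    best = [float('-inf')] * (n + 1)
    back = [0] * (n + 1)
    best[0] = 0.0
    for j in range(1, n + 1):
        for i in range(max(0, j - max_len), j):
            s = best[i] + seg_score(i, j)
            if s > best[j]:
                best[j], back[j] = s, i
    # Backtrack to recover the optimal segments.
    cuts, j = [], n
    while j > 0:
        cuts.append((back[j], j))
        j = back[j]
    return best[n], cuts[::-1]
```

Because the criterion is a sum over segments, the search cost is linear in the sequence length times the maximum segment length.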

8.
Parameter-free geometric document layout analysis   Total citations: 1 (self: 0, others: 1)
Automatic transformation of paper documents into electronic documents requires geometric document layout analysis as the first stage. However, variations in character font sizes, text line spacing, and document layout structures have long made it difficult to design a general-purpose document layout analysis algorithm, so previous methods have had to rely on tuned parameters. The authors propose a parameter-free method for segmenting document images into maximal homogeneous regions and identifying them as text, images, tables, and ruling lines. A pyramidal quadtree structure is constructed for multiscale analysis, and a periodicity measure is proposed to detect the periodical attribute of text regions for page segmentation. To obtain robust page segmentation results, a confirmation procedure using texture analysis is applied only to ambiguous regions. Based on the proposed periodicity measure, multiscale analysis, and confirmation procedure, the method performs geometric document layout analysis robustly, independent of character font sizes, text line spacing, and document layout structures. It was evaluated on the document database from the University of Washington and the MediaTeam Document Database; the results show that it is more accurate than previous methods.
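The periodicity idea can be illustrated on a row-projection profile: text regions produce a strongly periodic profile whose period is the line pitch, while image regions do not. A toy autocorrelation version, not the paper's exact measure:

```python
def projection_profile(image):
    # image: 2-D list of 0/1 pixels; count ink pixels per row.
    return [sum(row) for row in image]

def best_period(profile, min_p=2):
    # Pick the row shift that maximises the autocorrelation of the
    # mean-centred profile; for a text region this is the line pitch.
    n = len(profile)
    mean = sum(profile) / n
    centred = [v - mean for v in profile]
    def ac(p):
        return sum(centred[i] * centred[i + p] for i in range(n - p))
    return max(range(min_p, n // 2 + 1), key=ac)
```

A region whose peak autocorrelation is weak at every shift would be the kind of ambiguous region the paper passes to the texture-based confirmation step.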

9.
Handwritten text recognition systems commonly combine character classification confidence scores and context models to evaluate candidate segmentation-recognition paths, and the classification confidence is usually optimized at the character level. In this paper, we investigate different confidence-learning methods for handwritten Chinese text recognition and propose a string-level confidence-learning method that estimates confidence parameters by directly optimizing the performance of character string recognition. We first compare the performance of parametric (class-dependent and class-independent parameters) and nonparametric (isotonic regression) confidence-learning methods. Then, we propose two regularized confidence estimation methods and, in particular, a string-level confidence-learning method under the minimum classification error criterion. In experiments on online handwritten Chinese text recognition, the string-level confidence-learning method effectively improves string recognition performance: with three character classifiers, the character correct rates improve from 92.39, 90.24 and 88.69% to 92.76, 90.91 and 89.93%, respectively.

10.
11.
We describe a process of word recognition that has high tolerance for poor image quality, tunability to the lexical content of the documents to which it is applied, and high speed of operation. This process relies on the transformation of text images into character shape codes and on special lexica that contain information on the shape of words. We rely on the structure of English and the high efficiency of mapping between shape codes and the characters in the words. Remaining ambiguity is reduced by template matching using exemplars derived from surrounding text, taking advantage of the local consistency of font, face and size as well as image quality. This paper describes the effects of lexical content, structure and processing on the performance of a word recognition engine. Word recognition performance is shown to be enhanced by the application of an appropriate lexicon. Recognition speed is shown to be essentially independent of the details of lexical content, provided the intersection of the occurrences of words in the document and the lexicon is high. Word recognition accuracy depends on both the intersection and the specificity of the lexicon. Received May 1, 1998 / Revised October 20, 1998
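A toy version of character shape codes, assuming a simple ascender/descender/x-height partition of the Latin alphabet; the real coding in the paper is richer (dots, case, punctuation) than this sketch:

```python
ASCENDERS = set('bdfhklt')
DESCENDERS = set('gjpqy')

def shape_code(word):
    # Map each letter to a coarse shape class: 'A' for ascenders (and,
    # crudely, upper case and digits), 'D' for descenders, 'x' for
    # x-height letters. Many words collapse to the same code, which is
    # why shape lexica and template matching are needed to disambiguate.
    out = []
    for ch in word:
        if ch in ASCENDERS or ch.isupper() or ch.isdigit():
            out.append('A')
        elif ch in DESCENDERS:
            out.append('D')
        else:
            out.append('x')
    return ''.join(out)
```

Mapping a whole page to shape codes is much cheaper than full OCR, which is the source of the speed the abstract claims.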

12.
Biomedical event extraction is one of the most significant and challenging tasks in biomedical text information extraction and has attracted increasing attention in recent years. Its two most important subtasks are trigger recognition and argument detection. Most preceding methods treat trigger recognition as a classification task but ignore sentence-level tag information. Therefore, a sequence labeling model based on bidirectional long short-term memory (Bi-LSTM) and a conditional random field (CRF) is constructed for trigger recognition, which separately uses static pre-trained word embeddings combined with character-level word representations, and dynamic contextual word representations from a pre-trained language model, as model inputs. For the event argument detection task, a self-attention-based multi-classification model is proposed to make full use of entity and entity-type features. The F1 scores of trigger recognition and overall event extraction are 81.65% and 60.04%, respectively, and the experimental results show that the proposed method is effective for biomedical event extraction.

13.
A character co-occurrence frequency method for topic extraction from Chinese texts   Total citations: 24 (self: 0, others: 24)
Topic extraction is a basic task in automatic text processing, and it has conventionally started with word segmentation or word extraction. Because Chinese has no explicit delimiters between words, segmentation and word extraction often perform imperfectly, which in turn degrades topic extraction quality. This paper proposes a new method for automatic topic extraction from Chinese text that takes the character as the processing unit and is based on character co-occurrence frequency. The method is fast, adapts to many genres, completely avoids segmentation and word extraction, can be applied at multiple levels of topic extraction such as topic sentences and topic paragraphs, and applies equally to topic extraction in other languages. Experiments on automatic topic-sentence extraction show that the method extracts topic sentences from news texts with an accuracy of 77.19%. A comparative experiment on Chinese topic extraction further shows that omitting the segmentation step does not reduce the accuracy of the extraction algorithm.
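The character co-occurrence idea can be sketched for topic-sentence extraction as follows; the scoring is a plausible simplification, not the paper's exact formula:

```python
from collections import Counter
from itertools import combinations

def topic_sentence(sentences):
    # Count how often unordered character pairs co-occur within sentences
    # across the whole text, then pick the sentence whose character pairs
    # are the most frequent overall. No word segmentation is needed.
    pair_freq = Counter()
    for s in sentences:
        pair_freq.update(frozenset(p) for p in combinations(set(s), 2))
    def score(s):
        return sum(pair_freq[frozenset(p)] for p in combinations(set(s), 2))
    return max(sentences, key=score)
```

Because the unit is the character, the same routine works unchanged on Chinese, where word boundaries are not marked.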

14.
In this paper, we present a new text line detection method for handwritten documents. The proposed technique consists of three distinct steps. The first step includes image binarization and enhancement, connected component extraction, partitioning of the connected component domain into three spatial sub-domains, and average character height estimation. In the second step, a block-based Hough transform detects potential text lines, while the third step corrects possible splittings, detects text lines that the previous step did not reveal, and finally separates vertically connected characters and assigns them to text lines. The performance evaluation of the proposed approach is based on a consistent and concrete evaluation methodology.
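The Hough-transform voting at the heart of the second step can be sketched over component centroids; this is a plain accumulator, not the block-based variant the paper describes:

```python
import math
from collections import Counter

def hough_lines(points, n_theta=36, rho_step=1.0):
    # Vote each component centroid into a (theta, rho) accumulator.
    # Points lying on one text line of any skew vote into the same bin,
    # so accumulator peaks correspond to candidate text lines.
    acc = Counter()
    for x, y in points:
        for t in range(n_theta):
            theta = math.pi * t / n_theta
            rho = x * math.cos(theta) + y * math.sin(theta)
            acc[(t, round(rho / rho_step))] += 1
    return acc.most_common(1)[0]
```

Four collinear centroids on the row y = 5 all vote into a bin with rho = 5, so the peak count equals the number of components on the line.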

15.
Text representation and real-time classification based on an N-gram Chinese character string model   Total citations: 4 (self: 0, others: 4)
This paper proposes a vector-space text representation model based on N-gram Chinese character string features and uses it to implement a real-time text classification system. Compared with a vector space model that uses words as features, the new model relies on fast multi-keyword matching rather than expensive computations such as word segmentation, and can therefore classify text in real time. Because feature extraction in the N-gram character string model needs no dictionary-based segmentation, it can capture non-word phrase structures; in special applications, such as detecting harmful content on networks, it can automatically extract features that are sometimes better. Experimental results show that replacing complex word segmentation with simple multi-keyword matching has very little effect on classification quality. The study indicates that N-gram character string features and word features have essentially the same representational power for classification, while an N-gram-based classifier can run several times faster than a segmentation-based one. The paper also describes the automatic text classification system built on this model, including its architecture, feature extraction, and text similarity formula, and gives the evaluation methodology and experimental results.
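Character n-gram features and a similarity score can be computed without any segmentation, as the abstract argues. A minimal sketch (the similarity formula here is plain cosine, an assumption; the paper's formula may differ):

```python
import math
from collections import Counter

def char_ngrams(text, n=2):
    # Character n-gram features: no dictionary or word segmentation needed,
    # so non-word phrase fragments are captured too.
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def cosine(a, b):
    # Cosine similarity between two sparse n-gram count vectors.
    dot = sum(a[k] * b[k] for k in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0
```

Extracting bigrams is a single linear scan, which is what makes real-time classification feasible.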

16.
Segmentation and recognition of unconstrained handwritten Chinese character strings is a difficult problem in character recognition. Exploiting the characteristics of handwritten dates, this paper proposes a combined recognition method that integrates holistic word recognition with fixed-length character string segmentation and recognition. Holistic recognition treats the character string as a whole and needs no complex string segmentation. In fixed-length segmentation, the length of the character string is first predicted by recognition, candidate segmentation lines are then determined by projection and contour analysis, and the optimal segmentation path is finally selected by recognition. The two segmentation-recognition approaches are combined by rules, greatly improving system performance. Experiments on real bill images demonstrate the effectiveness of the method, with a segmentation-recognition accuracy of 93.3%.

17.
An experimental study of automatic Chinese word extraction based on the internal cohesion of character strings   Total citations: 14 (self: 7, others: 14)
Automatic word extraction is an important topic in text information processing. The currently prevailing strategy judges whether a candidate string is a word by evaluating the string's internal cohesion. This paper examines the performance of nine common statistical measures for automatic Chinese word extraction and then tries combining them in the hope of improving performance; to obtain the best possible combination, a genetic algorithm automatically tunes the combination weights. Experiments on extracting two-character words show that, among the nine measures, mutual information has the strongest extraction ability, with an F-measure of 54.77%; the combination reaches an F-measure of 55.47%, only 0.70% higher than mutual information alone, an insignificant gain. Our conclusions are: (1) the above statistics do not complement each other well; (2) in ordinary cases, using mutual information alone for automatic word extraction is recommended as simple and effective.
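Mutual information, the strongest of the nine measures, can be computed for adjacent character pairs as follows; this is a per-bigram PMI sketch under maximum-likelihood estimates, while the experimental setup in the paper differs:

```python
import math
from collections import Counter

def pmi_scores(corpus):
    # Pointwise mutual information of each adjacent character pair:
    #   pmi(xy) = log( p(xy) / (p(x) * p(y)) )
    # High cohesion (high PMI) suggests the pair is a true word.
    chars = Counter(corpus)
    bigrams = Counter(corpus[i:i + 2] for i in range(len(corpus) - 1))
    n, m = sum(chars.values()), sum(bigrams.values())
    return {bg: math.log((c / m) / ((chars[bg[0]] / n) * (chars[bg[1]] / n)))
            for bg, c in bigrams.items()}
```

In the corpus "abab", the pair "ab" occurs more often than chance predicts and so scores higher than "ba".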

18.
王寅同, 郑豪, 常合友, 李朔. 《控制与决策》 (Control and Decision), 2023, 38(7): 1825-1834
Chinese handwritten text recognition is one of the hot research problems in pattern recognition, facing a large number of character classes, wide variation in writing styles, and costly labeling of training data. To address these problems, a segmentation-free, recurrence-free residual attention network is proposed for end-to-end handwritten text recognition. First, with ResNet-26 as the backbone, depthwise separable convolutions extract meaningful features, and a residual attention gating module raises the importance of key regions in the text image. Second, a batch bilinear interpolation model stretches and squeezes the input representation, up-sampling the two-dimensional text representation into a one-dimensional text-line representation. Finally, connectionist temporal classification (CTC) serves as the loss function of the recognition model, establishing the correspondence between the high-level extracted representation and the character sequence labels. Experiments on the CASIA-HWDB2.x and ICDAR2013 datasets show that the proposed method achieves end-to-end handwritten text recognition effectively without any character- or line-level position information, and outperforms existing methods.
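The CTC rule that maps frame-level predictions to a character sequence can be sketched as greedy best-path decoding; the paper's training-time loss is the full CTC objective, of which this collapse rule is only the inference-time half:

```python
def ctc_collapse(frame_labels, blank=0):
    # CTC collapse: merge consecutive repeated labels, then drop blanks.
    # This is what lets a per-frame classifier emit a character sequence
    # without any explicit character segmentation.
    out, prev = [], None
    for lab in frame_labels:
        if lab != prev and lab != blank:
            out.append(lab)
        prev = lab
    return out
```

Note that a blank between two identical labels preserves a genuine double character, e.g. the run `[1, 0, 1]` collapses to `[1, 1]`, not `[1]`.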

19.
20.
This paper presents a new high-accuracy technique for recognizing both typewritten and handwritten English and Arabic texts without thinning. After segmenting the text into lines (horizontal segmentation) and the lines into words, it separates each word into its letters. Separating a text line (row) into words and a word into letters is performed with the region-growing technique (implicit segmentation) on the basis of three essential lines in a text row. This saves time, as there is no need to skeletonize or physically isolate letters from the tested word, and the input involves only the basic information: the scanned text. The baseline is detected, the word contour is defined, and the word is implicitly segmented into its letters according to a novel algorithm described in the paper. The extracted letter with its dots is treated as one unit in the recognition system. It is resized into a 9 × 9 matrix by bilinear interpolation after applying a lowpass filter to reduce aliasing, and the elements are then scaled to the interval [0, 1]. The resulting array is the input to the designed neural network. For typewritten texts, three Arabic letter fonts are used: Arial, Arabic Transparent and Simplified Arabic. The results showed an average recognition success rate of 93% for Arabic typewriting. This segmentation approach also applies to handwritten text, where words are classified with relatively high recognition rates for both Arabic and English. The experiments were performed in MATLAB and have shown promising results that can be a good base for further analysis of Arabic and other cursive-script text recognition as well as English handwritten texts. For English handwritten classification, an average success rate of about 80% was achieved, while for Arabic handwritten text the algorithm succeeded in about 90% of cases. The recent results have shown increasing success for both Arabic and English texts.
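The 9 × 9 bilinear resampling and [0, 1] scaling applied to each extracted letter can be sketched in pure Python, without the low-pass prefilter mentioned above:

```python
def bilinear_resize(img, out_h=9, out_w=9):
    # Resize a grayscale image (2-D list of numbers) with bilinear
    # interpolation, then scale the result to [0, 1] by the maximum value,
    # as done for each extracted letter before it enters the network.
    h, w = len(img), len(img[0])
    out = []
    for oy in range(out_h):
        y = oy * (h - 1) / (out_h - 1) if out_h > 1 else 0.0
        y0, fy = int(y), y - int(y)
        y1 = min(y0 + 1, h - 1)
        row = []
        for ox in range(out_w):
            x = ox * (w - 1) / (out_w - 1) if out_w > 1 else 0.0
            x0, fx = int(x), x - int(x)
            x1 = min(x0 + 1, w - 1)
            # Weighted average of the four surrounding pixels.
            v = (img[y0][x0] * (1 - fx) * (1 - fy)
                 + img[y0][x1] * fx * (1 - fy)
                 + img[y1][x0] * (1 - fx) * fy
                 + img[y1][x1] * fx * fy)
            row.append(v)
        out.append(row)
    mx = max(max(r) for r in out) or 1.0
    return [[v / mx for v in r] for r in out]
```

The fixed 9 × 9 output gives every letter, whatever its original size, the same 81-element input vector for the neural network.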
