Query returned 20 similar documents (search time: 62 ms)
1.
2.
3.
4.
Research on Web-Based Text Mining Technology (cited 2 times: 0 self-citations, 2 by others)
Xu Gaojian, Computer Technology and Development (《计算机技术与发展》), 2007, 17(6): 187-190
Most information on the Internet takes the form of text, and how to mine latent knowledge from this vast body of text remains an open problem. The goal of text mining is to discover useful knowledge in text of varying formats; it is a process of analyzing text and extracting specific information from it. This paper systematically introduces text mining and examines each stage of the text mining process, including construction of text features, feature extraction techniques, text classification, and text clustering. It also proposes a model for Web-based text mining that takes a university BBS forum as its information source and uses a high-level programming language to build an automatic text classifier.
5.
Text classification is the process of mapping documents to be classified onto a predetermined set of categories. Classification of English text has been studied extensively and is mature; because Chinese word formation is more complex than English, Chinese classification techniques and theory still need further research. Chinese text classification is crucial for information processing and for users' access to information. The classification process is fairly complex; this paper focuses on its key technologies: text preprocessing, text representation, feature extraction and weighting, and classification algorithms.
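The stages listed above — preprocessing, representation, feature statistics, and a classification algorithm — can be illustrated with a minimal multinomial naive Bayes sketch. The whitespace tokenizer and toy English corpus are illustrative assumptions; a real Chinese pipeline would replace the tokenizer with a word segmenter:

```python
import math
from collections import Counter

def tokenize(text):
    # Toy preprocessing: lowercase and split on whitespace.
    # (A Chinese pipeline would need a word segmenter here.)
    return text.lower().split()

def train(docs):
    # docs: list of (text, label). Count term frequencies per class.
    class_tf, class_docs = {}, Counter()
    for text, label in docs:
        class_docs[label] += 1
        class_tf.setdefault(label, Counter()).update(tokenize(text))
    return class_tf, class_docs

def classify(text, class_tf, class_docs):
    # Multinomial naive Bayes with add-one smoothing.
    vocab = {t for tf in class_tf.values() for t in tf}
    total = sum(class_docs.values())
    best, best_score = None, float("-inf")
    for label, tf in class_tf.items():
        score = math.log(class_docs[label] / total)
        denom = sum(tf.values()) + len(vocab)
        for tok in tokenize(text):
            score += math.log((tf[tok] + 1) / denom)
        if score > best_score:
            best, best_score = label, score
    return best
```

Given a handful of labeled snippets, `classify` returns the label whose smoothed log-likelihood for the query text is highest.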
6.
This paper briefly introduces the concept, taxonomy, and functions of Web mining, with emphasis on methods for Web text mining, including text feature representation and extraction, and text classification and clustering. It closes with an outlook on application areas of Web text mining.
7.
With the continued development and widespread adoption of the Internet, most text available online consists of documents from a variety of data sources. Because electronic text is growing explosively, the volume of accessible text has become massive; text mining has become a research hotspot in the information field, and retrieving target documents quickly has become a bottleneck for Internet development. Building on dynamic clustering and feature-attribute-based classification, this paper proposes a new model for a text classification system based on hybrid fuzzy clustering theory, and develops a fuzzy clustering simulation algorithm on top of the model. Experiments show that the algorithm effectively improves both the efficiency and the accuracy of text classification, so that target documents can be retrieved quickly in practical Web text mining, enabling intelligent mining of Internet text.
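The hybrid fuzzy clustering idea can be illustrated with a plain fuzzy c-means sketch, in which each document vector receives a degree of membership in every cluster rather than a hard assignment. The point data and parameters below are assumptions; the paper's actual hybrid model is not specified in the abstract:

```python
import random

def dist(a, b):
    # Euclidean distance between two feature vectors.
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def fuzzy_cmeans(points, c=2, m=2.0, iters=50, seed=0):
    # Standard fuzzy c-means: returns cluster centers and the
    # membership matrix u, where u[i][j] is the degree to which
    # point i belongs to cluster j.
    rng = random.Random(seed)
    u = [[rng.random() for _ in range(c)] for _ in points]
    u = [[v / sum(row) for v in row] for row in u]
    dim = len(points[0])
    for _ in range(iters):
        # Centroids: membership-weighted means of the points.
        centers = []
        for j in range(c):
            w = [u[i][j] ** m for i in range(len(points))]
            centers.append(tuple(
                sum(wi * p[d] for wi, p in zip(w, points)) / sum(w)
                for d in range(dim)))
        # Memberships: inverse relative distance to each centroid.
        for i, p in enumerate(points):
            d = [max(dist(p, cj), 1e-12) for cj in centers]
            for j in range(c):
                u[i][j] = 1.0 / sum((d[j] / dk) ** (2 / (m - 1)) for dk in d)
    return centers, u
```

On well-separated data the membership matrix converges so that each point's dominant membership identifies its cluster while still recording graded similarity to the other clusters.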
8.
By analyzing the characteristics of Web pages and the queries Internet users are interested in, this paper proposes a language-independent text classification model based on machine learning. The algorithm extracts page features from inter-character correlations, word frequencies, page markup, and shallow semantic analysis of user queries; it computes tunable frequency-weighting parameters and adds discriminative information for feature terms, then builds feature vector spaces for the predefined classes through positive and negative training, and classifies documents against them. The method shows a clear advantage when classifying similar documents.
9.
Analysis and Comparison of Web Text Clustering Algorithms (cited 2 times: 0 self-citations, 2 by others)
With the growth of computer networks, text resources are increasing at an astonishing rate, making information hard to find and poorly utilized. Fast, high-quality Web text clustering lets users retrieve the information they need from the Internet conveniently and quickly. This paper studies the key technologies of Web text clustering, such as page collection, denoising, word segmentation, and feature representation, and analyzes and compares common Web text clustering algorithms; the comparison results are of practical significance for applying text clustering algorithms.
10.
11.
Automatic text segmentation and text recognition for video indexing (cited 13 times: 0 self-citations, 13 by others)
Efficient indexing and retrieval of digital video is an important function of video databases. One powerful index for retrieval is the text appearing in them: it enables content-based browsing. We present our new methods for automatic segmentation of text in digital videos. The algorithms we propose make use of typical characteristics of text in videos in order to enable and enhance segmentation performance. The unique features of our approach are the tracking of characters and words over their complete duration of occurrence in a video and the integration of the multiple bitmaps of a character over time into a single bitmap. The output of the text segmentation step is then passed directly to a standard OCR software package to translate the segmented text into ASCII. A straightforward indexing and retrieval scheme is also introduced and used in the experiments to demonstrate that the proposed text segmentation algorithms, together with existing text recognition algorithms, are suitable for indexing and retrieval of relevant video sequences in and from a video database. Our experimental results are very encouraging and suggest that these algorithms can be used in video retrieval applications as well as to recognize higher-level semantics in videos.
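The character-bitmap integration step described above — combining the multiple bitmaps of a character over time into a single bitmap — can be approximated by pixel-wise voting across frames. This is a sketch of the idea, not the paper's exact method, and the voting threshold is an assumption:

```python
def integrate_bitmaps(frames, threshold=0.5):
    # frames: list of equally sized binary bitmaps (2D lists of 0/1)
    # of the same character across consecutive video frames. A pixel
    # is kept as text if it is set in at least `threshold` of the
    # frames, suppressing background pixels that flicker over time.
    h, w = len(frames[0]), len(frames[0][0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            votes = sum(f[y][x] for f in frames)
            out[y][x] = 1 if votes / len(frames) >= threshold else 0
    return out
```

Pixels that persist across most frames (stable character strokes) survive the vote; transient background clutter is removed before the bitmap is handed to OCR.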
12.
Wai Lam, M. Ruiz, P. Srinivasan, IEEE Transactions on Knowledge and Data Engineering, 1999, 11(6): 865-879
We develop an automatic text categorization approach and investigate its application to text retrieval. The categorization approach is derived from a combination of a learning paradigm known as instance-based learning and an advanced document retrieval technique known as retrieval feedback. We demonstrate the effectiveness of our categorization approach using two real-world document collections from the MEDLINE database. Next, we investigate the application of automatic categorization to text retrieval. Our experiments clearly indicate that automatic categorization improves the retrieval performance compared with no categorization. We also demonstrate that the retrieval performance using automatic categorization achieves the same retrieval quality as the performance using manual categorization. Furthermore, detailed analysis of the retrieval performance on each individual test query is provided.
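Instance-based categorization of the kind described can be sketched as cosine-similarity k-nearest-neighbour voting over term-frequency vectors. The retrieval-feedback component is omitted here, and the toy data in the usage note is an assumption:

```python
import math
from collections import Counter

def cosine(a, b):
    # Cosine similarity between two sparse term-frequency vectors.
    dot = sum(v * b.get(t, 0) for t, v in a.items())
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def knn_classify(query, training, k=3):
    # training: list of (term-frequency Counter, label) instances.
    # Rank stored instances by similarity to the query and take a
    # majority vote among the k nearest.
    vec = Counter(query.lower().split())
    ranked = sorted(training, key=lambda ex: cosine(vec, ex[0]),
                    reverse=True)
    votes = Counter(label for _, label in ranked[:k])
    return votes.most_common(1)[0][0]
```

Because classification defers to stored instances rather than a trained model, adding new labeled documents immediately changes future decisions — the property that makes instance-based learning pair naturally with retrieval feedback.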
13.
14.
The centroid and the boundary of a class are important features of that class. Using the centroids and boundaries of the training samples as the classification criterion, this paper proposes a fast text classification algorithm based on boundary-credibility similarity. By adjusting the similarity between a document and a class according to the class's boundary credibility, the algorithm overcomes imbalanced sample distributions across classes and uneven sample densities within classes, improving classification performance. Experimental results show that the algorithm improves classification quality, exhibits good robustness, and significantly improves classification efficiency.
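A rough sketch of the centroid-plus-boundary idea follows. The "boundary credibility" adjustment here is modeled as a simple distance penalty for points falling outside a class's boundary radius — an assumption, since the paper's exact formula is not given in the abstract:

```python
def dist(a, b):
    # Euclidean distance between two feature vectors.
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def build(classes):
    # classes: {label: [feature vectors]}. For each class store its
    # centroid and a boundary radius: the distance of the farthest
    # training sample from that centroid.
    model = {}
    for label, vecs in classes.items():
        dim = len(vecs[0])
        c = tuple(sum(v[d] for v in vecs) / len(vecs) for d in range(dim))
        r = max(dist(c, v) for v in vecs)
        model[label] = (c, r)
    return model

def classify(x, model):
    # Prefer the class whose centroid is nearest; points falling
    # outside a class boundary get an extra penalty proportional to
    # the overshoot (a stand-in for reduced boundary credibility).
    best, best_score = None, float("inf")
    for label, (c, r) in model.items():
        d = dist(c, x)
        score = d if d <= r else d + (d - r)
        if score < best_score:
            best, best_score = label, score
    return best
```

Classification costs one distance computation per class rather than one per training sample, which is the source of the speed advantage the abstract claims.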
15.
Research on a Text Mining Algorithm Based on Inverse-Document-Frequency Mutual Information (cited 1 time: 0 self-citations, 1 by others)
Traditional text classification algorithms treat every feature term as having the same influence on the classification result, which lowers accuracy and increases time complexity. After analyzing the general model of a text classification system and applying a mutual-information-based feature extraction method to select feature terms, this paper proposes a text classification algorithm based on inverse-document-frequency mutual information entropy. The algorithm first extracts features from the sample vectors using the vector space model (VSM); it then builds a keyword set for each text, filters the keywords, and uses mutual information to represent and compute the relevance between terms and document classes; finally it computes each keyword's weight in the document. Experiments show that, compared with traditional classification algorithms, the improved algorithm runs faster, has stronger nonlinear mapping ability, and achieves better classification results in both convergence speed and accuracy.
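The two statistics the abstract combines — mutual information between a term and a class, and an inverse-document-frequency style weight — can each be computed directly. The toy corpus in the usage note and the smoothed IDF variant are assumptions:

```python
import math

def mutual_information(docs, term, label):
    # docs: list of (set_of_terms, label). Pointwise mutual information
    # between the events "term occurs" and "document belongs to label":
    # log( P(term, label) / (P(term) * P(label)) ).
    n = len(docs)
    n_t = sum(1 for terms, _ in docs if term in terms)
    n_c = sum(1 for _, l in docs if l == label)
    n_tc = sum(1 for terms, l in docs if term in terms and l == label)
    if not (n_t and n_c and n_tc):
        return 0.0
    return math.log((n_tc * n) / (n_t * n_c))

def tfidf(term, doc_tokens, docs):
    # Term frequency in this document times a smoothed inverse
    # document frequency over the corpus.
    tf = doc_tokens.count(term) / len(doc_tokens)
    df = sum(1 for terms, _ in docs if term in terms)
    return tf * (math.log(len(docs) / (1 + df)) + 1)
```

Terms concentrated in one class score high on mutual information, while terms spread evenly across the corpus are discounted by the IDF factor — the combination gives each feature term a class-sensitive weight instead of treating all terms equally.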
16.
Digital modes of editing ask us to re-examine the past century of editorial theory and to situate emerging editorial approaches within this history. Using the computer as a new textual medium has brought about a renewed interest in the conditions for representation. This article concerns itself with how books and computers, respectively, represent texts, and how critical editing mediates or organizes those representations. It was written in 1997 as a critical response to J.J. McGann's essay The Rationale of Hypertext.
17.
Given the importance of textual information in video sequences and video indexing, this paper proposes a text localization algorithm based on mixed character features. The algorithm first extracts candidate text blocks by applying edge detection and projection analysis to a single frame sampled every 25 frames of the video sequence; it then filters the candidates with a support vector machine to remove non-text blocks; finally, it exploits the correlation between adjacent frames to locate text blocks in the remaining frames. The algorithm improves detection speed while maintaining high detection accuracy.
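The edge-detection-plus-projection step can be sketched as a horizontal projection profile over a binary edge map: rows dense with edge pixels are grouped into candidate text bands. The threshold and the toy edge map are assumptions; the SVM filtering and inter-frame tracking stages are omitted:

```python
def text_bands(edge_map, min_edges=3):
    # edge_map: 2D list of 0/1 edge pixels for one frame. Rows whose
    # edge-pixel count reaches `min_edges` are treated as text rows;
    # contiguous text rows are merged into candidate bands, returned
    # as (y_start, y_end) pairs.
    rows = [sum(row) >= min_edges for row in edge_map]
    bands, start = [], None
    for y, is_text in enumerate(rows):
        if is_text and start is None:
            start = y
        elif not is_text and start is not None:
            bands.append((start, y - 1))
            start = None
    if start is not None:
        bands.append((start, len(rows) - 1))
    return bands
```

A vertical projection over each band would then cut it into individual block candidates, which a classifier (an SVM in the paper) accepts or rejects.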
18.
It is well known that the classification effectiveness of a text categorization system is not simply a matter of learning algorithms; text representation factors are also at work. This paper considers the ways in which the effectiveness of text classifiers is linked to five text representation factors: “stop words removal”, “word stemming”, “indexing”, “weighting”, and “normalization”. Statistical analyses of experimental results show that performing “normalization” always promotes the effectiveness of text classifiers significantly. The effects of the other factors are not as great as expected. Contrary to common sense, a simple binary indexing method can sometimes be helpful for text categorization.
19.
Alistair Moffat, Software, 1989, 19(2): 185-198
The development of efficient algorithms to support arithmetic coding has meant that powerful models of text can now be used for data compression. Here the implementation of models based on recognizing and recording words is considered. Move-to-the-front and several variable-order Markov models have been tested with a number of different data structures; first the decisions that went into the implementations are discussed, and then experimental results are given showing English text represented in under 2.2 bits per character. Moreover, the programs run at speeds comparable to other compression techniques and are suited for practical use.
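The move-to-front word model mentioned above can be sketched over a fixed word alphabet: recently seen words migrate to the front of the table, so repeated words produce small indices that an entropy coder can compress well. The fixed-alphabet simplification is an assumption; a practical word-based compressor must also handle previously unseen words:

```python
def mtf_encode(seq, alphabet):
    # Emit each symbol's current position in the table, then move
    # the symbol to the front; repeats of recently seen symbols
    # therefore yield small indices.
    table = list(alphabet)
    out = []
    for s in seq:
        i = table.index(s)
        out.append(i)
        table.insert(0, table.pop(i))
    return out

def mtf_decode(codes, alphabet):
    # Mirror of the encoder: look the index up in the table, then
    # apply the same move-to-front update.
    table = list(alphabet)
    out = []
    for i in codes:
        s = table[i]
        out.append(s)
        table.insert(0, table.pop(i))
    return out
```

Because both sides apply identical table updates, decoding exactly inverts encoding; the skewed index distribution (lots of small values) is what the downstream arithmetic coder exploits.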
20.
Philippe Duchastel, Computers &amp; Education, 1986, 10(4)
The problems involved in accessing the meaning of information in information systems are explored through a contrast between two information technologies, the book and the computer. A distinction between format-structured information and semantically-structured information leads to the necessity to further distinguish between information and knowledge structures, with implications for the information technologies to seek (through access devices, text differentiation, and user-control) an adaptive match between information presentation and the user's own knowledge structures. The evolution of CAI systems toward intelligent tutoring systems is seen as the direction through which text access problems may be alleviated.