10 similar documents found; search time: 156 ms
1.
E. Kavallieratou N. Fakotakis G. Kokkinakis 《International Journal on Document Analysis and Recognition》2002,4(4):226-242
In this paper, an integrated offline recognition system for unconstrained handwriting is presented. The proposed system consists of seven main modules: skew angle estimation and correction, printed/handwritten text discrimination, line segmentation, slant removal, word segmentation, and character segmentation and recognition, building on both already existing and novel algorithms. The system has been tested on the NIST, IAM-DB, and GRUHD databases and achieves accuracy ranging from 65.6% to 100%, depending on the database and the experiment.
2.
《Information Systems》2000,25(4):309-322
Many real-time applications have very tight time constraints that cannot be met by disk-resident databases. For such applications, a main-memory database, in which the entire database is stored in main memory, is the proper choice. It has been shown that coarse-granule locking outperforms fine-granule locking for main-memory databases, and coarse-granule locking also makes it easy to extract data access patterns correctly from the canned transactions of main-memory real-time database systems. In this paper, we propose two real-time transaction scheduling algorithms, CCA-ALF (Cost Conscious Approach with Average Load Factor) and EDF-CR-ALF (Earliest Deadline First-Conditional Restart with ALF), which use both static information (e.g., deadlines) and dynamic information (e.g., system load) for main-memory databases by exploiting the data access patterns of transactions. We compare their performance with CCA and EDF-HP, which do not use system load information at all. Our simulations on main-memory databases indicate that (i) CCA-ALF outperforms EDF-HP, CCA, and EDF-CR-ALF in terms of miss percentage and mean lateness, and (ii) CCA-ALF adapts well to changes in the system load.
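The proposed algorithms extend deadline-driven scheduling with load information. As a baseline for comparison, a plain earliest-deadline-first scheduler over main-memory transactions can be sketched as follows (the transaction fields are illustrative; the paper's CCA-ALF load-factor heuristic is not reproduced here):

```python
import heapq

def edf_schedule(transactions, now=0):
    """Run transactions in earliest-deadline-first order and report
    which meet their deadlines. Each transaction is a tuple
    (name, exec_time, deadline); field names are illustrative."""
    ready = [(deadline, name, exec_time)
             for name, exec_time, deadline in transactions]
    heapq.heapify(ready)  # earliest deadline at the top
    clock, met, missed = now, [], []
    while ready:
        deadline, name, exec_time = heapq.heappop(ready)
        clock += exec_time  # main-memory model: pure CPU time, no I/O
        (met if clock <= deadline else missed).append(name)
    return met, missed

met, missed = edf_schedule([("t1", 3, 9), ("t2", 2, 4), ("t3", 5, 9)])
print(met, missed)  # ['t2', 't1'] ['t3']
```

A load-aware variant such as CCA-ALF would additionally consult an average load factor before admitting or restarting a transaction; that logic is specific to the paper.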
3.
Priority assignment in real-time active databases   Total citations: 1 (self: 0, others: 1)
Rajendran M. Sivasankaran John A. Stankovic Don Towsley Bhaskar Purimetla Krithi Ramamritham 《The VLDB Journal The International Journal on Very Large Data Bases》1996,5(1):19-34
Active databases and real-time databases have been important areas of research in the recent past, and it has been recognized that many benefits can be gained by integrating real-time and active database technologies. However, not much work has been done on transaction processing in real-time active databases. This paper deals with an important aspect of transaction processing in real-time active databases, namely the problem of assigning priorities to transactions. In these systems, time-constrained transactions trigger other transactions during their execution. We present three policies for assigning priorities to parent, immediate, and deferred transactions executing on a multiprocessor system, and then evaluate the policies through simulation. The policies use different amounts of semantic information about the transactions to assign priorities. The simulator has been validated against the results of earlier published studies. We conducted experiments in three settings: a task setting, a main-memory database setting, and a disk-resident database setting. Our results demonstrate that dynamically changing the priorities of transactions, depending on their behavior (triggering rules), yields a substantial improvement in the number of triggering transactions that meet their deadline in all three settings.
Edited by Henry F. Korth and Amit Sheth.
Received November 1994 / Accepted March 20, 1995
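The abstract distinguishes parent, immediate, and deferred triggered transactions. One plausible deadline-driven policy, sketched below, lets an immediately triggered transaction inherit its parent's priority while a deferred one runs on its own deadline; the class and the specific inheritance rule are illustrative, not the paper's definitions:

```python
class Transaction:
    """Toy transaction with a deadline and optional triggering parent."""
    def __init__(self, name, deadline, parent=None, deferred=False):
        self.name, self.deadline = name, deadline
        self.parent, self.deferred = parent, deferred

def priority(txn):
    """Smaller value = higher priority (deadline-driven).

    Immediately triggered transactions inherit the parent's priority,
    so they are never scheduled below the work that spawned them;
    deferred transactions fall back to their own deadline. This is one
    illustrative policy, not the paper's exact formulation."""
    if txn.parent is not None and not txn.deferred:
        return min(txn.deadline, priority(txn.parent))
    return txn.deadline

parent = Transaction("p", deadline=50)
immediate = Transaction("i", deadline=80, parent=parent)
deferred = Transaction("d", deadline=80, parent=parent, deferred=True)
print(priority(immediate), priority(deferred))  # 50 80
```

Under this rule, "dynamically changing the priorities" amounts to re-evaluating `priority` whenever a transaction triggers children.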
4.
Design and Implementation of a Dynamic GPS Monitoring and Dispatching System   Total citations: 6 (self: 1, others: 5)
This paper presents the design and implementation of a dynamic GPS monitoring and dispatching system. Built on GPS, GIS, and a GSM short-message communication platform, the system enables dynamic monitoring and management of all mobile targets, such as vehicles, within the coverage of the Fuzhou GSM network. It integrates the collection, storage, monitoring, and management of vehicle position and operating-status information. By bringing road base data and vehicle operating-status data together on a unified presentation platform, the system visualizes vehicle operation on the computer, making the data and analysis results more intuitive and concise. With the support of the GIS platform, vehicle management and monitoring are integrated into a single system.
5.
Most of the research on text categorization has focused on classifying text documents into a set of categories with no structural relationships among them (flat classification). However, in many information repositories documents are organized in a hierarchy of categories to support thematic search by browsing topics of interest. Considering the hierarchical relationships among categories opens several additional issues in the development of methods for automated document classification. Questions concern the representation of documents, the learning process, the classification process, and the evaluation criteria for experimental results. They are systematically investigated in this paper, whose main contribution is a general hierarchical text categorization framework in which the hierarchy of categories is involved in all phases of automated document classification, namely feature selection, learning, and classification of a new document. An automated threshold determination method for classification scores is embedded in the proposed framework; it can be applied to any classifier that returns a degree of membership of a document to a category. In this work three learning methods are considered for the construction of document classifiers, namely centroid-based, naïve Bayes, and SVM. The proposed framework has been implemented in the system WebClassIII and has been tested on three datasets (Yahoo, DMOZ, RCV1) that present a variety of situations in terms of hierarchical structure. Experimental results are reported and several conclusions are drawn on the comparison of the flat vs. the hierarchical approach, as well as on the comparison of different hierarchical classifiers. The paper concludes with a review of related work and a discussion of previous findings vs. our findings.
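The top-down hierarchical scheme can be sketched with the simplest of the three learners, a centroid classifier: at each level of the category tree, the document descends into the child whose centroid it most resembles. The toy hierarchy and vectors below are invented; WebClassIII's feature selection and threshold determination are richer than this.

```python
import math

def cosine(u, v):
    """Cosine similarity between two sparse term-weight dicts."""
    dot = sum(u[k] * v.get(k, 0.0) for k in u)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def centroid(docs):
    """Average term-weight vector of a category's training documents."""
    c = {}
    for d in docs:
        for k, x in d.items():
            c[k] = c.get(k, 0.0) + x / len(docs)
    return c

def classify(doc, node, training):
    """Walk the category tree top-down, at each level entering the
    child category whose centroid is most similar to the document."""
    children = tree.get(node, [])
    if not children:
        return node  # reached a leaf category
    best = max(children, key=lambda c: cosine(doc, centroid(training[c])))
    return classify(doc, best, training)

# Toy hierarchy: root -> {science, sports}; science -> {physics}.
tree = {"root": ["science", "sports"], "science": ["physics"]}
training = {
    "science": [{"quark": 1.0, "energy": 1.0}],
    "sports": [{"goal": 1.0, "match": 1.0}],
    "physics": [{"quark": 1.0}],
}
print(classify({"quark": 2.0}, "root", training))  # physics
```

A thresholding step, as in the paper, would additionally allow the walk to stop at an internal node when no child's score clears its learned threshold.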
6.
Julie Yu-Chih Liu 《International Journal of Intelligent Systems》2008,23(6):635-653
Although the problem of data integration in relational databases has been extensively studied, little work has addressed this problem in the context of fuzzy relational databases. Data integration is highly complex in fuzzy relational databases, partly because of the involvement of the resemblance relation. Inconsistent data redundancy may occur when the fuzzy databases to be integrated are associated with different resemblance relations on a given domain. This work presents the notion of consistency constraints and applies it to the problem of data integration in several fuzzy data models. The constraints ensure that fuzzy databases with different resemblance relations agree with each other regarding data redundancy. In addition, a solution for integrating inconsistent fuzzy databases with minimal information loss is provided. © 2008 Wiley Periodicals, Inc.
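The redundancy problem can be made concrete with a minimal sketch: two tuples are redundant when every attribute pair resembles above a threshold, so two databases whose resemblance relations assign different degrees to the same value pair can disagree on exactly that judgment. The resemblance tables, threshold, and domain values below are invented for illustration; the paper's consistency constraints are conditions under which such disagreement cannot arise.

```python
def resembles(a, b, table, threshold=0.8):
    """Resemblance test: identical values resemble fully; otherwise
    look up the degree for the unordered pair (symmetric by design)."""
    if a == b:
        return True
    return table.get(frozenset((a, b)), 0.0) >= threshold

def redundant(t1, t2, table, threshold=0.8):
    """Two tuples are redundant if all corresponding values resemble."""
    return all(resembles(x, y, table, threshold) for x, y in zip(t1, t2))

# Two databases with different resemblance relations on the same domain:
rel_a = {frozenset(("sedan", "saloon")): 0.9}
rel_b = {frozenset(("sedan", "saloon")): 0.5}
t1, t2 = ("sedan", "red"), ("saloon", "red")
print(redundant(t1, t2, rel_a), redundant(t1, t2, rel_b))  # True False
```

Here database A would merge the two tuples while database B would keep both, which is precisely the inconsistent-redundancy situation the constraints rule out.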
7.
With the continued development of human-machine dialogue systems, enabling a computer to accurately understand a speaker's intent, and to predict the intent of a reply from the dialogue history, is of great significance for such systems. Existing work has focused on predicting reply intent from the dialogue text and existing labels; in many scenarios, however, the reply may not yet have been generated. This paper therefore proposes a dialogue intent prediction model that incorporates reply generation. In the generation part, a Seq2Seq architecture generates text from the dialogue history to serve as the textual content of the future reply; in the classification part, an LSTM model converts the generated reply and the existing dialogue into clause-level representations, and an attention mechanism highlights the connection between utterances in the same dialogue turn and the generated reply. Experimental results show that the proposed model achieves a 2.54% F1-score improvement over a simple baseline, and that joint training helps improve model performance.
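The attention step of the classification part can be sketched in isolation: the generated-reply representation scores each history clause, and the softmax-weighted clauses form a context vector. Dot-product scoring and the fixed toy vectors below stand in for the paper's LSTM encoders.

```python
import math

def attention(query, keys):
    """Softmax dot-product attention of one query vector over a list
    of key vectors; returns (weights, context vector)."""
    scores = [sum(q * k for q, k in zip(query, ks)) for ks in keys]
    m = max(scores)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    weights = [e / z for e in exps]
    dim = len(query)
    context = [sum(w * ks[i] for w, ks in zip(weights, keys))
               for i in range(dim)]
    return weights, context

# Generated-reply vector attends over three clause vectors from the
# dialogue history (all vectors are made up for illustration).
reply = [1.0, 0.0]
clauses = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]]
weights, context = attention(reply, clauses)
print([round(w, 3) for w in weights])
```

The clause most aligned with the generated reply receives the largest weight, which is the "highlighting" role the abstract assigns to the attention mechanism.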
8.
9.
Multidimensional information is pervasive in many computer applications, including time series, spatial information, data warehousing, and visual data. While semistructured data and XML are becoming more and more popular for information integration and exchange, little research has been done on the design and implementation of semistructured database systems that manage multidimensional information efficiently. In this paper, dimension operators are defined based on a multidimensional logic which we call ML(). These operators can be used in applications such as multidimensional spreadsheets and the multidimensional databases usually found in decision support systems and data warehouses. Finally, a multidimensional XML database system has been prototyped and is described in detail. Technologies such as XSL are used to transform or visualise data from different dimensions.
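The flavor of a dimension operator can be sketched with a value that varies along several dimensions, where the operator selects the entries matching a partial dimension context. The nested-tuple encoding below is a stand-in for the paper's ML() logic and its XML representation, which are richer.

```python
def select(mdvalue, **context):
    """Return the entries of a multidimensional value that match every
    dimension named in `context` (a simplified dimension operator).

    mdvalue maps a tuple of (dimension, coordinate) pairs to a value."""
    return {dims: v for dims, v in mdvalue.items()
            if all(dict(dims).get(d) == val for d, val in context.items())}

# A price that varies along two dimensions, time and region.
price = {
    (("time", "2023"), ("region", "EU")): 100,
    (("time", "2023"), ("region", "US")): 110,
    (("time", "2024"), ("region", "EU")): 105,
}
print(select(price, time="2023"))        # two entries remain
print(select(price, time="2024", region="EU"))  # a single entry, 105
```

Fixing every dimension reduces the multidimensional value to an ordinary one, which is the typical use in a spreadsheet or warehouse query.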
10.
《Expert systems with applications》2014,41(2):406-411
Cross impact analysis (CIA) consists of a set of related methodologies that predict the occurrence probability of a specific event and that also predict the conditional probability of a first event given a second event. The conditional probability can be interpreted as the impact of the second event on the first. Most of the CIA methodologies are qualitative that means the occurrence and conditional probabilities are calculated based on estimations of human experts. In recent years, an increased number of quantitative methodologies can be seen that use a large number of data from databases and the internet. Nearly 80% of all data available in the internet are textual information and thus, knowledge structure based approaches on textual information for calculating the conditional probabilities are proposed in literature. In contrast to related methodologies, this work proposes a new quantitative CIA methodology to predict the conditional probability based on the semantic structure of given textual information. Latent semantic indexing is used to identify the hidden semantic patterns standing behind an event and to calculate the impact of the patterns on other semantic textual patterns representing a different event. This enables to calculate the conditional probabilities semantically. A case study shows that this semantic approach can be used to predict the conditional probability of a technology on a different technology. 相似文献