Found 20 similar documents (search time: 0 ms)
1.
Interacting with Computers, 1996, 8(1): 51-68
Manuals and interactive help are tedious to provide and difficult to maintain, and it is hard to ensure that they remain correct, even for simple systems. The result is a loss in product quality, felt particularly by users and designers committed to long-term product development. The paper shows that it is possible to systematically put a system specification and its documentation into exact correspondence. It follows that much previously manual work can be done automatically, with considerable advantages, including guaranteed correctness and completeness, as well as support for powerful new features such as intelligent adaptive assistance. This paper shows how interactive assistance can be provided to answer 'how to?', 'why not?' and other questions.
2.
A symbolic approach for content-based information filtering  Cited by: 2 (self: 0, other: 2)
Byron L.D. Bezerra, Information Processing Letters, 2004, 92(1): 45-52
3.
4.
Attacks on content-based email filters and filtering improvements  Cited by: 1 (self: 0, other: 1)
Content-based filtering is the most effective anti-spam technique, but spammers use every means available to attack content-based spam filters, seriously degrading filter accuracy and robustness. After introducing the main content-based spam filtering techniques, this paper analyses the common attacks against content-based spam filters and proposes corresponding filtering improvements. For the word-salad attack in particular, simulated attack experiments were carried out on several filters. Finally, the paper analyses trends in spamming techniques and the main directions for improving future anti-spam technology.
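To make the word-salad attack concrete, here is a minimal sketch (our illustration, not from the paper) of a Graham-style naive Bayes spam score; the per-word probabilities are invented. Padding a spam message with innocuous words pulls the combined score down, which is exactly how the attack degrades filter accuracy.

```python
# Minimal sketch of a naive Bayes spam score; the per-word probabilities
# below are invented for illustration, not taken from the paper.
import math

word_spaminess = {
    "viagra": 0.99, "winner": 0.95, "free": 0.90,
    "meeting": 0.10, "report": 0.08, "weather": 0.05,
}

def spam_score(tokens, prior=0.5, default=0.4):
    """Combine per-word probabilities in log-odds space."""
    log_odds = math.log(prior / (1 - prior))
    for t in tokens:
        p = word_spaminess.get(t, default)
        log_odds += math.log(p / (1 - p))
    return 1 / (1 + math.exp(-log_odds))

plain = ["viagra", "winner", "free"]
salad = plain + ["meeting", "report", "weather"]  # word-salad padding
print(round(spam_score(plain), 3))  # ~1.0: clearly spam
print(round(spam_score(salad), 3))  # ~0.9: padding drags the score down
```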
5.
6.
7.
A film recommender agent expands and fine-tunes collaborative-filtering results according to filtered content elements, namely actors, directors, and genres. This approach supports recommendations for newly released, previously unrated titles. Directing users to relevant content is increasingly important in today's society with its ever-growing information mass. To this end, recommender systems have become a significant component of e-commerce systems and an interesting application domain for intelligent agent technology.
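As a rough illustration of how content elements can cover the cold-start case the abstract mentions, the sketch below scores a brand-new, unrated film against a user's liked films by overlap in actors, directors, and genres; the weights and data are assumptions, not the agent's actual model.

```python
# Hypothetical content-based fallback for an unrated film; the field
# weights are assumptions for illustration.
WEIGHTS = {"actors": 0.4, "directors": 0.3, "genres": 0.3}

def content_score(film, liked_films):
    """Average weighted overlap between a film and the user's liked films."""
    score = 0.0
    for liked in liked_films:
        for field, w in WEIGHTS.items():
            score += w * len(set(film[field]) & set(liked[field]))
    return score / max(len(liked_films), 1)

new_film = {"actors": ["A", "B"], "directors": ["D"], "genres": ["thriller"]}
liked = [{"actors": ["A"], "directors": ["D"], "genres": ["thriller", "drama"]}]
print(content_score(new_film, liked))  # 0.4 + 0.3 + 0.3 = 1.0
```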
8.
Buffer overflows cause serious problems in various categories of software systems. In critical systems, such as health-care, nuclear or aerospace software applications, a buffer overflow may cause severe threats to humans or severe economic losses. If they occur in network or security applications, they can be exploited to gain administrator privileges, perform system attacks, access unauthorized data, or misuse the system. This paper proposes a combination of genetic algorithms, linear programming, evolutionary testing, and static and dynamic information to detect buffer overflows. The newly proposed test input generation process avoids the need for human intervention to define and tune genetic algorithm weights and therefore becomes completely automated. The process guides the genetic search towards the detection of buffer overflows with a fitness function that takes into account static and dynamic information. Reported results of our case studies, consisting of two sets of open-source programs, show that the new process and fitness function outperform previously published approaches.
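The sketch below illustrates the general shape of such a search (our simplification, not the paper's implementation): a tiny genetic loop evolves string inputs against a simulated target, with a fitness that mixes a static signal (reaching the branch that guards the vulnerable copy) and a dynamic one (bytes copied relative to the buffer size).

```python
# Toy evolutionary test-input generation; the target, fitness weights,
# and mutation operators are all invented for illustration.
import random

BUF_SIZE = 64  # capacity of the (simulated) vulnerable buffer

def program_under_test(data: str) -> int:
    """Simulated target: bytes copied into a fixed 64-byte buffer when the
    guarded branch is taken; a real harness would instrument the binary."""
    if data.startswith("CMD:"):       # branch guarding the unsafe copy
        return len(data)              # bytes written to the buffer
    return 0

def fitness(data: str) -> float:
    # Static part: reward inputs that reach the branch guarding the sink.
    reached = 1.0 if data.startswith("CMD:") else 0.0
    # Dynamic part: reward inputs that push the copy closer to overflow.
    return reached + program_under_test(data) / BUF_SIZE

def mutate(data: str) -> str:
    ops = [lambda s: s + random.choice("ABCDM:"),  # append a byte
           lambda s: s[:-1],                       # drop a byte
           lambda s: "CMD:" + s]                   # splice in the prefix
    return random.choice(ops)(data)

population = [""] * 20
for generation in range(300):
    population = sorted((mutate(p) for p in population * 2),
                        key=fitness, reverse=True)[:20]
    best = population[0]
    if program_under_test(best) > BUF_SIZE:
        print(f"generation {generation}: overflowing input of "
              f"length {len(best)} found")
        break
```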
9.
Clustering of related or similar objects has long been regarded as a potentially useful way of helping users navigate an information space such as a document collection. Many clustering algorithms and techniques have been developed and implemented, but as the sizes of document collections have grown these techniques have not scaled to large collections because of their computational overhead. To solve this problem, the proposed system concentrates on an interactive text clustering methodology: probability-based, topic-oriented, semi-supervised document clustering. Since web pages and other documents now contain both text and large numbers of images, the proposed system also applies content-based image retrieval (CBIR) to image clustering to reinforce the document clustering approach. It suggests two kinds of indexing keys, major colour sets (MCS) and distribution block signatures (DBS), to prune away images irrelevant to a given query image. Major colour sets capture colour information, while distribution block signatures capture spatial information. After successively applying these filters to a large database, only a small number of high-potential candidates similar to the query image remain. The system then uses a quad modelling method (QM) to set the initial weights of two-dimensional cells in the query image according to each major colour, and retrieves more similar images through a similarity association function based on those weights. The system's efficiency is evaluated by implementing and testing the clustering results with the DBSCAN and k-means clustering algorithms. Experiments show that the proposed document clustering algorithm performs with an average efficiency of 94.4% for various document categories.
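The sketch below illustrates the major-colour-set pruning step in isolation (hypothetical quantization levels and thresholds; DBS, QM, and the weighting step are omitted): database images that share too few dominant quantized colours with the query are discarded before any finer comparison.

```python
# Illustrative MCS pruning; quantization granularity, top_k and min_overlap
# are assumed values, not the paper's parameters.
from collections import Counter

def major_colour_set(pixels, levels=4, top_k=3):
    """Quantize RGB pixels to a coarse grid and keep the top-k colours."""
    quantized = [(r * levels // 256, g * levels // 256, b * levels // 256)
                 for r, g, b in pixels]
    return {c for c, _ in Counter(quantized).most_common(top_k)}

def mcs_filter(query_pixels, database, min_overlap=2):
    """Keep only images sharing at least min_overlap major colours."""
    q = major_colour_set(query_pixels)
    return [name for name, pixels in database.items()
            if len(q & major_colour_set(pixels)) >= min_overlap]

sky = [(20, 20, 240)] * 50
sunset = [(250, 10, 10)] * 50 + sky
database = {"sunset": sunset, "ocean": sky * 2}
print(mcs_filter(sunset, database))  # ['sunset']; 'ocean' is pruned
```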
10.
Thomas R. Gruber, Sunil Vemuri, James Rice, International Journal of Human-Computer Studies, 1997, 46(6): 687-706
Virtual documents are hypermedia documents that are generated on demand in response to reader input. This paper describes a virtual document application that generates natural language explanations about the structure and behavior of electromechanical systems. The application, called DME, structures the interaction with the reader as a question-answer dialog. Each page of the hyperdocument is the answer to a question, and each link on each page is a follow-up question that leads to another answer. DME is a model-based virtual document generator; unlike conventional database front-ends that provide views onto data, DME dynamically constructs the document's content (i.e. coherent explanations in English) from underlying mathematical and symbolic models. DME-based virtual documents have been running on the WWW since late 1993. They are used to document engineered systems in support of collaborative design and simulation-based training. In this paper we describe and demonstrate the DME application (with examples that run), and describe how it generates virtual documents on the web. We discuss the impact that model-based virtual documentation can have on the way technical documentation is authored and delivered.
11.
12.
Daewook Lee, Joonho Kwon, Weidong Yang, Hyoseop Shin, Jae-min Kwak, Sukho Lee, Journal of Intelligent Manufacturing, 2009, 20(3): 273-282
XML stream filtering has gained widespread attention from the research community in recent years, and there have been many efforts to improve the performance of XML filtering systems by exploiting XML schema information. In this paper, we design and implement an XML stream filtering system, SFilter, which uses DTD or XML Schema information to improve performance. We propose a simplification step and two kinds of optimization, one static and one dynamic. Simplification and static optimization transform the XPath queries into automata that serve as an index structure for filtering; dynamic optimization is performed at runtime, during filtering. We developed five kinds of static optimization and two kinds of dynamic optimization, and we present a novel filtering algorithm for the transformed XPath queries together with the runtime optimizations. Experimental results show that our system filters XML streams efficiently.
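As a toy illustration of the automaton idea (a generic NFA-style matcher written for exposition, not SFilter's actual index), linear XPath queries with child and descendant axes are compiled into step lists, and SAX-style start/end events advance per-depth state sets:

```python
# Generic sketch: compile linear XPath queries ('/' and '//' axes only)
# into step lists and run them as NFAs over a stream of element events.
def compile_query(xpath):
    parts = xpath.split("/")     # '/doc//title' -> ['', 'doc', '', 'title']
    steps, descendant = [], False
    for part in parts[1:]:
        if part == "":           # an empty part marks a '//' axis
            descendant = True
        else:
            steps.append((part, descendant))
            descendant = False
    return steps

class StreamFilter:
    def __init__(self, queries):
        self.steps = {qid: compile_query(q) for qid, q in queries.items()}
        self.stacks = {qid: [{0}] for qid in self.steps}  # one frame per depth
        self.matched = set()

    def start_element(self, name):
        for qid, steps in self.steps.items():
            frame = set()
            for s in self.stacks[qid][-1]:
                if s == len(steps):
                    continue                 # already accepted
                step_name, descendant = steps[s]
                if descendant:
                    frame.add(s)             # '//' states survive at any depth
                if step_name == name:
                    frame.add(s + 1)
                    if s + 1 == len(steps):
                        self.matched.add(qid)
            self.stacks[qid].append(frame)

    def end_element(self):
        for stack in self.stacks.values():
            stack.pop()

f = StreamFilter({"q1": "/doc//title", "q2": "/doc/body"})
for event, name in [("start", "doc"), ("start", "body"), ("start", "title"),
                    ("end", None), ("end", None), ("end", None)]:
    f.start_element(name) if event == "start" else f.end_element()
print(sorted(f.matched))  # ['q1', 'q2']
```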
13.
14.
Although some advocate their elimination, documents are the preferred and most effective way to communicate information in every engineering discipline. Documents (whether hard copy or electronic) are the evidence that engineering tasks have been performed. There are many commercial documentation packages that can help you define a document's format and structure; which one you choose will determine what capabilities are available and the document's underlying representation. Current approaches for automating document generation are based on extending the facilities of software development tools and document publishing systems. Although these approaches provide some automation, they are still labor intensive. Today's approaches can be categorized as either push or pull.
15.
Wei Su, Zhongping Sun, Ruofei Zhong, Menglin Li, Jingguo Zhu, International Journal of Remote Sensing, 2013, 34(14): 3616-3635
Recent advances in laser scanning hardware have allowed rapid generation of high-resolution digital terrain models (DTMs) for large areas. However, the automatic discrimination of ground and non-ground light detection and ranging (lidar) points in areas covered by densely packed buildings or dense vegetation is difficult. In this paper, we introduce a new hierarchical moving curve-fitting filter algorithm that is designed to automatically and rapidly filter lidar data to permit automatic DTM generation. This algorithm is based on fitting a second-degree polynomial surface using flexible tiles of moving blocks and an adaptive threshold. The initial tile size is determined by the size of the largest building in the study area. Based on an adaptive threshold, non-ground points and ground points are classified and labelled step by step. In addition, we used a multi-scale weighted interpolation method to estimate the bare-earth elevation for non-ground points and obtain a recovered terrain model. Our experiments in four study areas showed that the new filtering method can separate ground and non-ground points in both urban areas and those covered by dense vegetation. The filter error ranged from 4.08% to 9.40% for Type I errors, from 2.48% to 7.63% for Type II errors, and from 5.01% to 7.40% for total errors. These errors are lower than those of triangulated irregular network filter algorithms.
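A minimal sketch of the core fitting step (our reconstruction; the tiling, adaptive threshold, and multi-scale interpolation are omitted): fit z = f(x, y) as a second-degree polynomial over a tile by least squares, then flag points far above the fitted surface as non-ground. The 0.5 m threshold is an assumed placeholder for the adaptive one.

```python
# Illustrative single-tile curve-fitting filter; threshold and data are
# assumptions, not the paper's parameters.
import numpy as np

def design(x, y):
    """Design matrix for z = a + bx + cy + dx^2 + exy + fy^2."""
    return np.column_stack([np.ones_like(x), x, y, x**2, x*y, y**2])

def classify_tile(points, threshold=0.5):
    """True = ground, False = non-ground, by residual above a fitted surface."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    A = design(x, y)
    coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)  # least-squares surface fit
    residual = z - A @ coeffs
    return residual < threshold  # points far above the surface are non-ground

# Toy tile: gently sloping ground plus one rooftop-like return 5 m up.
rng = np.random.default_rng(0)
xy = rng.uniform(0, 10, size=(100, 2))
z = 0.1 * xy[:, 0] + rng.normal(0, 0.05, size=100)
pts = np.column_stack([xy, z])
pts[0, 2] += 5.0
mask = classify_tile(pts)
print(mask[0], mask[1:].all())  # False True
```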
16.
For metadata streams, this paper proposes an efficient method for evaluating user subscriptions. An index structure groups subscriptions for indexing, eliminating the repeated indexing, counting, and comparison caused by a single subscription containing multiple predicates. A new group-based filtering algorithm caches predicate matching results so that they can be propagated throughout the subscription filtering process, achieving very high filtering performance. Experimental results show that the system can effectively handle workloads of up to a million subscriptions; introducing stemming and stop-word removal in the experiments greatly improved the system's recall and precision.
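The sketch below illustrates the shared-predicate caching idea (a generic reconstruction with made-up fields, not the paper's index structures): each distinct predicate is evaluated once per incoming metadata event, and the cached result is reused by every subscription that contains it.

```python
# Illustrative predicate caching for subscription filtering; predicates,
# subscriptions, and event fields are invented for the example.
predicates = {
    "p1": ("lang", "en"),
    "p2": ("topic", "spam"),
    "p3": ("year", "2004"),
}
subscriptions = {
    "s1": {"p1", "p2"},
    "s2": {"p2", "p3"},
    "s3": {"p1", "p3"},
}

def match(event):
    # Evaluate every distinct predicate exactly once and cache the result.
    cache = {pid: event.get(field) == value
             for pid, (field, value) in predicates.items()}
    # A subscription fires when all of its predicates are cached as True.
    return [sid for sid, pids in subscriptions.items()
            if all(cache[p] for p in pids)]

print(match({"lang": "en", "topic": "spam", "year": "1999"}))  # ['s1']
```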
17.
In this paper a content-based image retrieval method is proposed that can search large image databases efficiently by color, texture, and shape content. Quantized RGB histograms and the dominant triple (hue, saturation, and value), extracted from a quantized HSV joint histogram in the local image region, represent global and local color information in the image. Entropy and the maximum entry of co-occurrence matrices provide texture information, and an edge angle histogram represents shape information. A relevance feedback approach that couples the proposed features is used to obtain better retrieval accuracy. A new indexing method that supports fast retrieval in large image databases is also presented. Tree structures constructed by the k-means algorithm, combined with the triangle inequality, eliminate candidate images from the similarity calculation between the query image and each database image. We find that the proposed method eliminates on average up to 92.2 percent of the images from direct comparison.
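A compact sketch of the triangle-inequality pruning step (toy 2-D features and invented names; the actual system uses k-means tree structures over the full feature set): since |d(q,c) - d(x,c)| lower-bounds d(q,x), images whose bound exceeds the search radius are eliminated without a full comparison.

```python
# Illustrative triangle-inequality pruning against one cluster centre.
import math

def dist(a, b):
    return math.dist(a, b)  # Euclidean distance between feature vectors

database = {"img1": (0.1, 0.9), "img2": (0.8, 0.2), "img3": (0.15, 0.85)}
centre = (0.5, 0.5)                                    # a k-means cluster centre
to_centre = {k: dist(v, centre) for k, v in database.items()}  # precomputed

def search(query, radius):
    dq = dist(query, centre)   # one distance computation for the query
    hits = []
    for name, feat in database.items():
        if abs(dq - to_centre[name]) > radius:  # lower bound already too far
            continue                            # pruned: no full comparison
        if dist(query, feat) <= radius:
            hits.append(name)
    return hits

print(search((0.12, 0.88), radius=0.1))  # ['img1', 'img3']; img2 pruned
```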
18.
In this paper we present a robust information integration approach to identifying images of persons in large collections such as the Web. The underlying system relies on combining content analysis, which involves face detection and recognition, with context analysis, which involves extraction of text or HTML features. Two aspects are explored to test the robustness of this approach: sensitivity of the retrieval performance to the context analysis parameters and automatic construction of a facial image database via automatic pseudofeedback. For the sensitivity testing, we reevaluate system performance while varying context analysis parameters. This is compared with a learning approach where association rules among textual feature values and image relevance are learned via the CN2 algorithm. A face database is constructed by clustering after an initial retrieval relying on face detection and context analysis alone. Experimental results indicate that the approach is robust for identifying and indexing person images.
19.
Duen-Ren Liu, Chin-Hui Lai, Hsuan Chiu, International Journal of Human-Computer Studies, 2011, 69(9): 587-601
Collaborative filtering (CF) recommender systems have emerged in various applications to support item recommendation, solving the information-overload problem by suggesting items of interest to users. Recently, trust-based recommender systems have incorporated the trustworthiness of users into CF techniques to improve the quality of recommendation. They propose trust computation models that derive trust values from users' past ratings on items: a user is more trustworthy if s/he has contributed more accurate predictions than other users. Nevertheless, conventional trust-based CF methods do not derive trust values from users' varying information needs on items over time. In knowledge-intensive environments, users usually have various information needs in accessing required documents over time, which forms a sequence of documents ordered by access time. We propose a sequence-based trust model that derives trust values from users' sequences of ratings on documents. The model considers two factors, time and document similarity, in computing the trustworthiness of users. The proposed model, enhanced with the similarity of user profiles, is incorporated into a standard collaborative filtering method to discover trustworthy neighbors for making predictions. Experimental results show that the proposed model improves the prediction accuracy of the CF method in comparison with other trust-based recommender systems.
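To make the two factors concrete, here is a schematic trust computation of our own devising (the paper's actual formula may differ): each past prediction is weighted by a time-decay factor and by the similarity between the rated document and the target document, and accurate predictions raise the trust value.

```python
# Schematic sequence-based trust score; half-life, profiles, and the
# combination rule are assumptions for illustration.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def trust(predictions, now, target_profile, half_life=30.0):
    """predictions: list of (timestamp_days, doc_profile, error in [0, 1])."""
    num = den = 0.0
    for t, profile, error in predictions:
        decay = 0.5 ** ((now - t) / half_life)   # time factor
        sim = cosine(profile, target_profile)     # document similarity
        w = decay * sim
        num += w * (1.0 - error)                  # accurate -> more trusted
        den += w
    return num / den if den else 0.0

history = [(0, (1, 0, 1), 0.1),    # old, similar document, accurate
           (90, (0, 1, 0), 0.8)]   # recent, dissimilar document, inaccurate
print(round(trust(history, now=100, target_profile=(1, 0, 1)), 3))  # 0.9
```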
20.
Noemí Pérez-Díaz, David Ruano-Ordás, Florentino Fdez-Riverola, José R. Méndez, Expert Systems with Applications, 2012, 39(16): 12487-12500
The Tragedy of the Commons, introduced by Hardin (1968), revealed how shared, limited resources become completely depleted as an effect of human behaviour. By analogy, common spamming activities can be properly modelled by this solid theory, and consequently a young Internet security industry has emerged to fight spam. However, the massive intensification of spam deliveries in recent years has created the need for significant improvements in filter accuracy. In this context, current research efforts are mainly focused on providing a wide variety of content-based techniques able to overcome common spam-filtering shortcomings. Although theoretical filtering evaluation is generally taken into consideration in scientific works, most evaluation protocols are not appropriate for correctly assessing the performance of models during filter operation in real environments. To cover the gap between basic research and the applied deployment of well-known spam filtering techniques, this work proposes a novel, straightforward evaluation methodology able to rank available models from four different but complementary perspectives: static, dynamic, adaptive, and internationalisation. In the present study, we applied our SDAI methodology to compare eight well-known content-based spam filtering techniques using several established accuracy measures. Results showed the effect of knowledge grain size and revealed several unexpected situations related to the behaviour of the analysed models.