Similar Articles
20 similar articles found.
1.
《Ergonomics》2012,55(11):1425-1437
This paper discusses the need for designers of process plant supervisory systems to make a greater effort to anticipate the information that operators require to carry out their duties. A method for dealing with this problem of information requirements specification in process plant design is proposed. The method translates a task analysis into a set of standard task elements from which standard sets of information, called ‘sub-goal templates’, can be derived. The resultant information requirements specification sets out the operators' information needs in the context of the operating goals to be attained. Early trials with the method indicate its promise, but highlight the need to implement it in a computer tool to assist the designer. The features of such a tool and the further work necessary to develop and test the method are described.

2.
The use of the computing-with-words paradigm for automatic text document categorization is discussed. This specific problem of information retrieval (IR) is becoming increasingly important, notably in view of the rapid proliferation of textual information available on the Internet. The main issues to be addressed are document representation and classification. The use of fuzzy logic for both problems has already been studied in some depth, though for the latter, i.e., classification, generally not in an IR context. Our approach is based mainly on Zadeh's classical calculus of linguistically quantified propositions. Moreover, we employ results related to fuzzy (linguistic) queries in IR, notably various interpretations of the weights of query terms. Some preliminary results on widely adopted text corpora are presented.
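As a rough illustration of the underlying calculus (the piecewise-linear quantifier below is an assumption, not the paper's exact choice), the truth of a linguistically quantified proposition such as "most of the query terms match the document" is obtained by averaging the per-term match degrees and passing the result through the quantifier's membership function:

    # Sketch of Zadeh's calculus of linguistically quantified propositions;
    # the membership function for the quantifier "most" is an illustrative assumption.
    def truth_of_proposition(match_degrees, quantifier):
        r = sum(match_degrees) / len(match_degrees)  # proportion of X that are A
        return quantifier(r)                          # truth("Q of X are A")

    def most(r):
        if r >= 0.8:
            return 1.0
        if r <= 0.3:
            return 0.0
        return (r - 0.3) / 0.5

    # Degrees to which a document satisfies each query term:
    print(truth_of_proposition([0.9, 0.7, 0.4, 1.0], most))  # 0.9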

3.
This paper describes an efficient approach to the representation and analysis of complex networks. Typically, an n-node system is represented by an n × n connection matrix. This paper presents a new connection-matrix representation scheme that uses three fields, "begin node", "end node", and "component id", to represent each component in the network. The proposed representation is more concise than the n × n matrix, which is often sparsely populated. The paper also describes a network simplification algorithm based on the revised connection matrix. When applied to a large system with 55 tie-sets, the algorithm reduced the network to a single tie-set.
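A minimal sketch of the data structure (the Python names are assumptions): each entry stores the three fields directly, so storage grows with the number of connections rather than with n squared:

    # Sketch: three-field connection records instead of a sparse n x n matrix
    # (class and field names are illustrative assumptions).
    from dataclasses import dataclass

    @dataclass
    class Connection:
        begin_node: int
        end_node: int
        component_id: int

    # A 5-node network stored as three records rather than a 5 x 5 matrix.
    network = [
        Connection(1, 2, 101),
        Connection(2, 3, 102),
        Connection(3, 5, 103),
    ]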

4.
Due to the steady increase in the number of heterogeneous types of location information on the Internet, it is hard to assemble a complete overview of the geospatial information needed for knowledge-acquisition tasks related to specific geographic locations. Text- and photo-based geographical datasets contain abundant location data, such as location-based tourism information, and therefore define high-dimensional spaces of highly correlated attributes. In this work, we combine text- and photo-based location information in a novel information fusion approach that exploits effective image annotation and location-based text mining to enhance the identification of geographic locations and spatial cognition. We describe our feature extraction methods for annotating images, and our text mining approach for analyzing images and texts simultaneously, in order to carry out geospatial text mining and image classification. Photo images and textual documents are then projected into a unified feature space to generate a co-constructed semantic space for information fusion. We also employ text mining to classify documents into categories based on their geospatial features, with the aim of discovering relationships between documents and geographical zones. The experimental results show that the proposed method can effectively enhance location-based knowledge discovery.
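As a loose sketch of the fusion step (the weighted concatenation below is an illustrative assumption, not the paper's construction), each modality is normalized and the two feature vectors are joined into one unified space:

    # Sketch: projecting text and image-annotation features into one space
    # (the weighting scheme is an illustrative assumption).
    import numpy as np

    def unify(text_vec, image_tag_vec, w_text=0.5):
        t = text_vec / (np.linalg.norm(text_vec) + 1e-12)
        v = image_tag_vec / (np.linalg.norm(image_tag_vec) + 1e-12)
        return np.concatenate([w_text * t, (1.0 - w_text) * v])

    fused = unify(np.array([0.2, 0.8, 0.1]), np.array([1.0, 0.0, 0.5, 0.3]))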

5.
6.
《Information Fusion》2009,10(1):25-50
In today’s fast-paced military operational environment, vast amounts of information must be sorted and fused, not only to allow commanders to make situation assessments, but also to support the generation of hypotheses about enemy force disposition and enemy intent. Current information fusion technology has two limitations. First, current approaches do not consider the battlefield context as a first-class entity. In contrast, we consider situational context in terms of terrain analysis and inference. Second, there are no integrated and implemented models of the high-level fusion process. This paper describes the HiLIFE (High-Level Information Fusion Environment) computational framework for seamless integration of the high levels of fusion (levels 2, 3, and 4). The crucial components of HiLIFE presented in this paper are: (1) multi-sensor fusion algorithms, and their performance results, that operate in heterogeneous sensor networks to determine not only single targets but also force aggregates, (2) computational approaches for terrain-based analysis and inference that automatically combine low-level terrain features (such as forested areas, rivers, etc.) with additional information, such as weather, and transform them into high-level, militarily relevant abstractions, such as NO-GO and SLOW-GO areas, avenues of approach, and engagement areas, (3) a model for inferring adversary intent by mapping sensor readings of opponent forces to possible opponent goals and actions, and (4) sensor management for positioning intelligence collection assets for further data acquisition. The HiLIFE framework closes the loop on information fusion by specifying how the different components can computationally work together in a coherent system. Furthermore, the framework is inspired by a military process, the Intelligence Preparation of the Battlefield, that grounds the framework in practice. HiLIFE is integrated with a distributed military simulation system, OTBSAF, and the RETSINA multi-agent infrastructure to provide agile and sophisticated reasoning. In addition, the paper presents validation results of the automated terrain analysis that were obtained through experiments with military intelligence Subject Matter Experts (SMEs).
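As a loose illustration of component (2) (the thresholds and feature set are assumptions, not HiLIFE's actual inference rules), low-level terrain and weather features can be mapped to mobility abstractions:

    # Sketch: combining low-level terrain features into mobility classes
    # (thresholds and features are illustrative assumptions).
    def mobility_class(slope_deg, forested, river_crossing, heavy_rain):
        if river_crossing or slope_deg > 30:
            return "NO-GO"
        if forested or heavy_rain or slope_deg > 15:
            return "SLOW-GO"
        return "GO"

    print(mobility_class(slope_deg=20, forested=True,
                         river_crossing=False, heavy_rain=False))  # SLOW-GO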

7.
A method for automating the process of system decomposition is described. The method is based on a formal specification scheme, a formal definition of good decomposition, heuristic rules governing the search for good candidate decompositions, and a measure of complexity that allows ranking of the candidate decompositions. The decomposition method has been implemented as a set of experimental computerized systems analysis tools and applied to a standard problem for which other designs already exist. The results are encouraging, in that decompositions generated using other methodologies map easily into those suggested by the computerized tools. Additionally, use of the method indicates that when more than one ‘good’ decomposition is suggested by the system, the specifications may have been incomplete; that is, the computerized tools can identify areas where more information should be sought by analysis.
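As a toy illustration of ranking candidates by complexity (the measure below, inter-module coupling minus intra-module cohesion, is an assumption, not the paper's measure):

    # Sketch: rank candidate decompositions by a complexity measure
    # (the coupling-minus-cohesion measure is an illustrative assumption).
    def complexity(assignment, edges):
        inter = sum(1 for a, b in edges if assignment[a] != assignment[b])
        return 2 * inter - len(edges)   # equals inter-module minus intra-module edges

    edges = [("parse", "check"), ("check", "emit"), ("parse", "emit")]
    candidates = [
        {"parse": 1, "check": 1, "emit": 2},   # two modules
        {"parse": 1, "check": 2, "emit": 3},   # every element in its own module
    ]
    best = min(candidates, key=lambda a: complexity(a, edges))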

8.
9.
10.
11.
This paper discusses the main differences between humanistic and mechanistic business process modeling. While the mechanistic approach requires strict process formalization, emphasizes technical details, and restricts the modeling task to technology experts, the humanistic approach centers on the end user. We developed a modeling approach and a collaborative tool supporting end-user business process modeling. Design storyboards were adopted as the paradigm for knowledge representation and visual composition. The main contributions of this research are the knowledge representation structure and a collaborative tool supporting visual composition of business process models.

12.
In this paper, we consider numerical techniques that enable us to verify the existence of solutions to a general obstacle problem using computers. We describe a numerical verification algorithm for solving a two-dimensional obstacle problem and report a numerical result.
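For orientation (this is the standard variational-inequality formulation of the obstacle problem, not necessarily the paper's exact setting): find u in the convex set K = { v \in H^1_0(\Omega) : v \ge \psi \text{ a.e.} }, where \psi is the obstacle, such that

    \int_\Omega \nabla u \cdot \nabla (v - u) \, dx \;\ge\; \int_\Omega f \, (v - u) \, dx \qquad \text{for all } v \in K.

A verification method of this kind then encloses such a solution u within computable bounds.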

13.
黄丽雯, 钱微. 《计算机应用》, 2006, 26(11): 2626-2627, 2630
This paper proposes a new method that improves the HITS algorithm. The method combines document content with heuristic cues such as phrases, sentence length, and first-sentence priority to discover sub-topics across multiple documents, and converts sub-topic features into graph nodes for ranking. Experiments on the DUC 2004 data show that the method is an effective approach to multi-document summarization.
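For reference, the standard HITS iteration that the paper builds on (the paper's heuristic modifications are not reproduced here) alternately updates hub and authority scores and normalizes them:

    # Standard HITS iteration over an adjacency matrix (illustrative only;
    # the paper's modified node scoring is not reproduced here).
    import numpy as np

    def hits(adj, iters=50):
        n = adj.shape[0]
        hub = np.ones(n)
        auth = np.ones(n)
        for _ in range(iters):
            auth = adj.T @ hub           # pointed to by good hubs -> authority
            hub = adj @ auth             # points to authorities -> good hub
            auth /= np.linalg.norm(auth)
            hub /= np.linalg.norm(hub)
        return hub, auth

    adj = np.array([[0, 1, 1], [0, 0, 1], [1, 0, 0]], dtype=float)
    hub_scores, auth_scores = hits(adj)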

14.
Support for information and knowledge exchange and sharing is a key issue in the information society. As resources come to be shared globally, mutually well-understood knowledge of intellectual property deserves attention. However, there is a lack of systematic information-modeling methodology for this issue; closely connected to this problem is that most intelligent legal systems are inadequate for multinational semantic mapping of legal article information. We propose an ontology-guided approach that provides a semantic primitive representation of legal information from an intention perspective. The domain ontology we developed is used as a fundamental conceptual framework to maintain consistency among diverse legal representations.

15.
The growth of information delivery over the Internet has made automatic text categorization essential. This investigation explores the challenges of multi-class text categorization using a one-against-one fuzzy support vector machine, with Reuters news as the example data. The performance of four different membership functions in the one-against-one fuzzy support vector machine is measured using macro-average performance indices. Analytical results indicate that the proposed method achieves comparable or better performance than the one-against-one support vector machine.
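For reference (standard definitions, not specific to this paper), macro-averaged indices average per-class precision P_i and recall R_i so that every class counts equally; one common macro-F1 averages the per-class F1 values:

    P_{\text{macro}} = \frac{1}{|C|}\sum_{i=1}^{|C|} P_i, \qquad
    R_{\text{macro}} = \frac{1}{|C|}\sum_{i=1}^{|C|} R_i, \qquad
    F1_{\text{macro}} = \frac{1}{|C|}\sum_{i=1}^{|C|} \frac{2 P_i R_i}{P_i + R_i}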

16.
An information retrieval process to aid in the analysis of code clones
The advent of new static analysis tools has automated the search for code clones, which are duplicated or similar code fragments in a program. However, clone detection tools can report many clones if the source code being searched is large. Programmers may have difficulty comprehending the extensive results from the detection tool, which may inhibit their ability to maintain the identified clones. Latent Semantic Indexing (LSI) is an information retrieval technique that attempts to find relationships in a corpus based on an analysis of the documents in the corpus and the terms in the documents. In this paper, LSI is used to cluster clone classes that have been identified initially by a clone detection tool. The goal is to detect trends and associations among the clustered clone classes and to determine whether they provide further comprehension to assist in the maintenance of clones. An experimental evaluation of the approach is reported, using a sequence of chained tools to analyze clones detected in the Microsoft Windows NT kernel source code.
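As a rough sketch of the LSI step (the toy corpus and rank k are illustrative assumptions), a term-by-document matrix is factored with a truncated SVD and documents are compared in the reduced space:

    # Sketch of Latent Semantic Indexing via truncated SVD
    # (toy corpus and rank k are assumptions for illustration).
    import numpy as np

    # Rows = terms, columns = clone-class "documents" (term frequencies).
    A = np.array([[2, 0, 1],
                  [1, 1, 0],
                  [0, 3, 1],
                  [0, 1, 2]], dtype=float)

    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    k = 2                                   # keep the k largest singular values
    docs_k = (np.diag(s[:k]) @ Vt[:k]).T    # documents in the k-dim LSI space

    def cosine(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

    # Similarity between clone-class documents 0 and 2 in the reduced space.
    print(cosine(docs_k[0], docs_k[2]))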

Robert Tairas is a Ph.D. student in the Department of Computer and Information Sciences at the University of Alabama at Birmingham (UAB) and a member of the Software Composition and Modeling (SoftCom) laboratory. His research interests include code clone analysis and model-driven engineering. He received an MS in Computer Science from UAB in 2005.
Jeff Gray is an Associate Professor in the Department of Computer and Information Sciences at UAB, where he co-directs the Software Composition and Modeling (SoftCom) laboratory. He received a Ph.D. in Computer Science from Vanderbilt University, and an MS and BS in Computer Science from West Virginia University. His research interests include model-driven engineering, aspect-oriented software development, and generative programming. He is a 2007 NSF CAREER award winner and current Chair of the Alabama IEEE Computer Society.

17.
Research on a Text Preprocessing Method for Filtering Undesirable Information
Texts carrying undesirable content on the Internet currently take many shifting forms. Focusing on the characteristic variations of sensitive terms in such content, this paper proposes a content-based text preprocessing scheme for filtering undesirable information, and discusses in particular how to recognize and handle sensitive terms whose structure has been altered. The study shows that preprocessing these variant forms before word segmentation improves filtering efficiency.
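A minimal sketch of the kind of preprocessing involved (the separator set below is an illustrative assumption): characters inserted to disguise a sensitive term are stripped before segmentation and matching:

    # Sketch: normalize disguised sensitive terms before word segmentation
    # (the separator set is an illustrative assumption).
    import re

    SEPARATORS = r"[\s\*\-_\.\|]+"    # characters often inserted to evade filters

    def normalize(text):
        return re.sub(SEPARATORS, "", text)

    print(normalize("敏*感 词"))       # -> "敏感词", now matchable against a blocklist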

18.
This paper describes an efficient combinatorial method for simplification of topological features in a 3D scalar function. The Morse-Smale complex, which provides a succinct representation of a function's associated gradient flow field, is used to identify topological features and their significance. The simplification process, guided by the Morse-Smale complex, proceeds by repeatedly applying two atomic operations that each remove a pair of critical points from the complex. Efficient storage of the complex results in execution of these atomic operations at interactive rates. Visualization of the simplified complex shows that the simplification preserves significant topological features while removing small features and noise.
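Schematically (persistence-ordered cancellation is a common strategy in this setting; the details below are assumptions, not the paper's exact atomic operations), the simplification loop repeatedly removes the least significant critical-point pair until a significance threshold is reached:

    # Sketch: simplify by cancelling critical-point pairs in order of
    # increasing persistence (illustrative; not the paper's exact operations).
    import heapq

    def simplify(candidate_pairs, threshold):
        # candidate_pairs: (persistence, critical_point_a, critical_point_b)
        heapq.heapify(candidate_pairs)
        cancelled = []
        while candidate_pairs and candidate_pairs[0][0] < threshold:
            cancelled.append(heapq.heappop(candidate_pairs))  # one atomic cancellation
        return cancelled

    pairs = [(0.02, "saddle1", "min1"), (0.50, "saddle2", "max1"),
             (0.01, "saddle3", "min2")]
    print(simplify(pairs, threshold=0.1))   # removes the two low-persistence pairs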

19.
Tool wear is an important criterion in metal cutting, affecting part quality, chip formation, and the economics of the cutting process. In order to account for tool wear adequately in tool and process design, simulation tools that predict tool wear in metal cutting processes are required. Within this paper, an advanced simulation approach is presented that couples FE simulations of chip formation with a user-defined subroutine extending the functionality of the commercial FE code for wear simulation; the focus lies on the development of this method. The continuous wear process is discretized into finite steps, with the wear rate modelled as constant within each step. Based on the Usui wear-rate equation, the local thermo-mechanical load obtained by FE simulation is transformed into local wear rates. The geometric representation of the wear progress is implemented by shifting the finite-element nodes of the engaged tool domain. A novel iterative procedure for updating the tool geometry to account for the wear progress is presented.
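For reference, the Usui wear-rate equation in its commonly cited form (the constants C_1 and C_2 are calibrated per tool-workpiece pairing) relates the local wear rate to contact stress, sliding velocity, and temperature:

    \frac{dW}{dt} = C_1 \, \sigma_n \, v_s \, \exp\!\left(-\frac{C_2}{T}\right)

where \sigma_n is the normal contact stress, v_s the sliding velocity, and T the absolute contact temperature.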

20.
Due to the advantages of pay-on-demand, expand-on-demand, and high availability, cloud databases (CloudDB) have been widely used in information systems. However, since a CloudDB is distributed on an untrusted cloud side, effectively protecting the massive amount of private information in the CloudDB is an important problem. Although traditional security strategies (such as identity authentication and access control) can prevent illegal users from accessing unauthorized data, they cannot prevent internal users on the cloud side from accessing and exposing personal privacy information. In this paper, we propose a client-based approach to protecting personal privacy in a CloudDB. In the approach, privacy data are encrypted with a traditional encryption algorithm before being stored on the cloud side, ensuring their security. To execute various kinds of query operations over the encrypted data efficiently, the encrypted data are also augmented with an additional feature index, so that as much of each query operation as possible can be processed on the cloud side without decrypting the data. To this end, we explore how the feature index of privacy data is constructed, and how a query operation over privacy data is transformed into a new query operation over the index data so that it can be executed correctly on the cloud side. The effectiveness of the approach is demonstrated by theoretical analysis and experimental evaluation. The results show that the approach performs well in terms of security, usability, and efficiency, and is thus effective for protecting personal privacy in a CloudDB.
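A minimal client-side sketch of the idea (bucketization as the feature index is an assumption; the paper's actual index construction may differ): each value is encrypted before upload and stored alongside a coarse bucket id, so a range predicate can be pre-filtered on the cloud side and finished exactly on the client after decryption:

    # Sketch: client-side encryption plus a coarse feature index (bucket id).
    # Bucketization is an illustrative assumption, not the paper's scheme,
    # and the hash below is a stand-in for a real cipher (do not use in practice).
    import hashlib

    def encrypt(value, key):
        return hashlib.sha256((key + str(value)).encode()).hexdigest()

    def bucket(value, width=10):
        return value // width             # coarse, server-visible index

    def store(value, key):
        return {"cipher": encrypt(value, key), "bucket": bucket(value)}

    # "age BETWEEN 23 AND 37" is rewritten as "bucket IN {2, 3}" on the cloud
    # side; the client decrypts the returned rows and filters them exactly.
    print(store(29, key="secret"))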
