Similar Documents
20 similar documents found (search time: 31 ms)
1.
2.
As an application of artificial intelligence and expert system technology to database design, this paper presents an intelligent design tool, NITDT, which comprises a requirements specification language NITSL, a knowledge representation language NITKL, and an inference engine with uncertainty reasoning capability. NITDT currently covers the requirements analysis and conceptual design stages of database design. However, it can be integrated with NITDBA, another database design tool developed at NIT, to form an integrated design tool supporting the whole process of database design.

3.
Guo Ping, Cheng Daijie. Computer Science (计算机科学), 2003, 30(11): 40-43
As the basis of an intelligent system, it is very important to guarantee the consistency and non-redundancy of knowledge in a knowledge database. Because knowledge comes from a variety of sources, knowledge that is redundant, overlapping, or even contradictory must be handled during the integration of knowledge databases. This paper studies an integration method for multiple knowledge databases. First, it finds the inconsistent knowledge sets between the knowledge databases by rough set classification and presents a method for eliminating the inconsistency using test data. Then, it takes the consistent knowledge sets as the initial population of a genetic computation and constructs a genetic fitness function based on the accuracy, practicability, and spreadability of the knowledge representation. Finally, classifying the results of the genetic computation reduces the knowledge redundancy of the knowledge database. The paper also presents a framework for knowledge database integration based on rough set classification and genetic algorithms.

4.
With the rapid growth of information and knowledge, automatically classifying text documents is becoming a hotspot of knowledge management. A critical capability of knowledge management systems is to classify text documents into categories that are meaningful to users. In this paper, a text topic classification model based on domain ontology, using the Vector Space Model, is proposed. Eigenvectors serving as input to the vector space model are constructed from the concepts and hierarchical structure of the ontology, which also provides the domain knowledge. However, a limited-vocabulary problem is encountered while mapping keywords to their corresponding ontology concepts. A synonymy lexicon is therefore utilized to extend the ontology and compress the eigenvector, which solves the problem that eigenvectors are too large and complex to be calculated by traditional methods. Finally, combining the support of each concept, a top-down method following the ontology structure is used to complete the topic classification. An experimental system is implemented and the model is applied to this practical system. Test results show that the model is feasible.
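The vector-space matching step described above can be sketched as follows. This is a minimal cosine-similarity classifier; the concept weights, topic names, and vectors are illustrative assumptions, not data from the paper:

```python
import math

def cosine_similarity(u, v):
    # Cosine of the angle between two concept-weight vectors.
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    if norm_u == 0 or norm_v == 0:
        return 0.0
    return dot / (norm_u * norm_v)

def classify(doc_vec, topic_vecs):
    # Assign the document to the topic whose centroid is most similar.
    return max(topic_vecs, key=lambda t: cosine_similarity(doc_vec, topic_vecs[t]))

# Hypothetical weights over four ontology concepts.
doc = [0.8, 0.1, 0.0, 0.4]
topics = {"finance": [0.9, 0.0, 0.1, 0.3], "sports": [0.0, 0.7, 0.6, 0.0]}
```

In the paper's setting the vector components would be ontology-concept weights rather than raw keyword frequencies, which is what keeps the eigenvectors compact.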

5.
This paper develops a state-based regression method for planning domains with sensing operators and a representation of the knowledge of the planning agent. The language includes primitive actions, sensing actions, and conditional plans. The regression operator is direct in that it does not depend on a progression operator for its formulation. We prove the soundness and completeness of the regression formulation with respect to the definition of progression and the semantics of a propositional modal logic of knowledge. The approach is illustrated with a running example that cannot be handled by related methods that utilize an approximation of knowledge instead of the full semantics of knowledge used here. It is our expectation that this work will serve as the foundation for extending state-based regression planning to include sensing and knowledge.

6.
Entity linking (EL) systems aim to link entity mentions in a document to their corresponding entity records in a reference knowledge base. Existing EL approaches usually ignore the semantic correlation between the mentions in the text, and are limited by the scale of the local knowledge base. In this paper, we propose a novel graph-ranking collective Chinese entity linking (GRCCEL) algorithm, which can take advantage of both the structured relationships between entities in the local knowledge base and the additional background information offered by external knowledge sources. Through an improved weighted word2vec textual similarity and an improved PageRank algorithm, more semantic and structural information can be captured in the document. With an incremental evidence mining process, stronger discrimination capability for similar entities is obtained. We evaluate the performance of our algorithm on several open-domain corpora. Experimental results show the effectiveness of our method on the Chinese entity linking task and demonstrate its superiority over state-of-the-art methods.
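The graph-ranking core of such methods can be sketched as follows. This is plain PageRank, not the paper's improved weighted variant; the toy entity graph and default damping factor are illustrative assumptions:

```python
def pagerank(graph, damping=0.85, iters=50):
    # graph: node -> list of outgoing neighbours (assumes every node
    # has at least one outgoing edge, so no dangling-node handling).
    nodes = list(graph)
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iters):
        new = {n: (1 - damping) / len(nodes) for n in nodes}
        for n, outs in graph.items():
            share = damping * rank[n] / len(outs)
            for m in outs:
                new[m] += share
        rank = new
    return rank

# Toy graph of candidate entities linked in a knowledge base.
ranks = pagerank({"e1": ["e2", "e3"], "e2": ["e1"], "e3": ["e1"]})
```

In a collective EL setting the edge weights would come from entity relatedness scores, so that mutually coherent candidates reinforce each other.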

7.
A Frame Based Architecture for Information Integration in CIMS
This paper formulates an architecture for information integration in computer integrated manufacturing systems (CIMS). The architecture takes the frame structure as the single link among applications and between applications and physical storage. All the advantages of form-feature-based integrated systems carry over to the frame-based architecture, since the frame structure here takes form features as its primitives. Further advantages not found in form-feature-based systems also appear: default knowledge and dynamic domain knowledge can be attached to frames, and the frame structure is easy to change and extend, because the frame structure is a typical knowledge representation scheme in artificial intelligence that has attracted substantial research interest.

8.
Knowledge-based modeling is a trend in complex system modeling technology. To extract process knowledge from an information system, an approach to knowledge modeling based on interval-valued fuzzy rough sets is presented in this paper, in which attribute reduction is the key to obtaining the simplified knowledge model. By defining dependency and inclusion functions, algorithms for attribute reduction and rule extraction are obtained. Approximate inference plays an important role in the development of a fuzzy system. To improve the inference mechanism, we provide a method of similarity-based inference in an interval-valued fuzzy environment. Combining the conventional compositional rule of inference with similarity-based approximate reasoning, an inference result is deduced via rule translation, similarity matching, relation modification, and projection operations. This approach is applied to the problem of predicting welding distortion in marine structures, and the experimental results validate the effectiveness of the proposed methods of knowledge modeling and similarity-based inference.

9.
This paper proposes a novel text representation and matching scheme for Chinese text retrieval. At present, the indexing methods of Chinese retrieval systems are either character-based or word-based. Character-based indexing methods, such as bi-gram or tri-gram indexing, have high false-drop rates due to mismatches between queries and documents. On the other hand, it is difficult to efficiently identify all the proper nouns, domain terminology, and phrases in word-based indexing systems. The new indexing method uses both the proximity and the mutual information of word pairs to represent text content, so as to overcome the high false-drop, new-word, and phrase problems that exist in character-based and word-based systems. The evaluation results indicate that the average query precision of proximity-based indexing is 5.2% higher than the best results of TREC-5.
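The mutual-information statistic for word pairs that such indexing relies on can be sketched as follows. This computes pointwise mutual information over adjacent word pairs; the sample text is an illustrative assumption:

```python
import math
from collections import Counter

def pmi_pairs(tokens):
    # Pointwise mutual information of adjacent word pairs:
    #   PMI(x, y) = log2( P(x, y) / (P(x) * P(y)) )
    # High PMI suggests the pair behaves like a collocation or phrase.
    unigrams = Counter(tokens)
    bigrams = Counter(zip(tokens, tokens[1:]))
    n_uni = len(tokens)
    n_bi = max(len(tokens) - 1, 1)
    scores = {}
    for (x, y), count in bigrams.items():
        p_xy = count / n_bi
        p_x = unigrams[x] / n_uni
        p_y = unigrams[y] / n_uni
        scores[(x, y)] = math.log2(p_xy / (p_x * p_y))
    return scores

scores = pmi_pairs("to be or not to be".split())
```

A real indexer would also record pair proximity (positions within a window) alongside the PMI score, as the abstract describes.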

10.
It is an urgent task to implement many expert systems to capture the valuable expertise of experienced doctors of traditional Chinese medicine. To meet this need, a software tool has been developed. It features a unified diagnosis model, a specially designed knowledge representation language, and an efficient yet effective inference engine. To implement an expert system, it is only necessary to input the expert's knowledge expressed in the knowledge representation language, without designing any additional software. The time and effort required to implement an expert system are thus greatly reduced. The software is very compact and can run on microcomputers, e.g., the IBM PC/XT. Two traditional Chinese medical expert systems have been successfully implemented with the tool.

11.
RAO logic for multiagent framework
In this paper, we deal with how agents reason about the knowledge of others in a multiagent system. We first present a knowledge representation framework called reasoning about others (RAO), which is designed specifically to represent the concepts and rules used in reasoning about the knowledge of others. From a class of sentences commonly used by people in daily life to reason about others, a rule called the position exchange principle (PEP) is abstracted. PEP is described as an axiom scheme in RAO and regarded as a basic rule for agents to reason about others; its form and role are similar to modus ponens and the (K) axiom of knowledge logic. The relationship between speech acts and common sense, which is necessary for RAO, is also discussed. Based on ideas from the situation calculus, this relationship is characterized by an axiom schema in RAO. Our theory is demonstrated by an example.

12.
Based on the α-cut set representation of fuzzy sets, this paper proposes an easily quantifiable approach to generalizing Bayesian networks under fuzzy priors and fuzzy sample data. The combination of Bayesian statistics and fuzzy set theory extends advanced fuzzy control to include stochastic information processing and higher-level control knowledge representation. The approach can also facilitate more natural and broader knowledge representation in Bayesian networks in general.
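The α-cut representation the approach builds on can be illustrated with a minimal sketch; the fuzzy set and threshold values are illustrative assumptions:

```python
def alpha_cut(fuzzy_set, alpha):
    # The α-cut of a fuzzy set is the crisp set of elements whose
    # membership degree is at least alpha.
    return {x for x, mu in fuzzy_set.items() if mu >= alpha}

# A fuzzy set is fully recoverable from its family of α-cuts, which is
# what makes this representation easy to quantify in a Bayesian network.
temperature = {"low": 0.2, "medium": 0.6, "high": 0.9}
```

Sweeping alpha from 0 to 1 yields a nested family of crisp sets, and interval computations on those sets replace direct manipulation of the membership function.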

13.
Preface
The International Conference on Knowledge Science, Engineering and Management (KSEM), held in Belfast, Northern Ireland in 2010, was the fourth in the series. The conference focuses on the three themes of knowledge science, engineering and management, covering a wide range of research topics in KSEM-related areas. The event offered an invaluable opportunity to bring together researchers, engineers and practitioners to present original work and the latest advances in knowledge representation, knowledge engineering and knowledge-related systems, as well as to discuss and debate practical challenges in deploying knowledge-based systems and research opportunities in the community. To highlight the research activities of this event and provide insight into the latest developments in the related areas, we present this special issue, dedicated to all the delegates and researchers who made the conference a success. We selected more than 10 papers presented at the conference and asked the authors to extend them. After careful review, we finally selected 6 papers for inclusion in this special issue. This collection of extended papers comprises KSEM 2010's finest contributions and covers the prominent topics of knowledge representation and reasoning, ontology engineering and applications, and data mining and knowledge discovery. They represent the state of the art in KSEM-related research areas.

14.
With the increasing popularity of cloud-based data management, improving the performance of queries in the cloud is an urgent problem. Summaries of data distribution and statistical information have been commonly used in traditional databases to support query optimization, and histograms are of particular interest. Naturally, histograms can also support query optimization and efficient utilization of computing resources in the cloud: they provide helpful reference information for generating optimal query plans, and yield basic statistics useful for guaranteeing the load balance of query processing. Since it is too expensive to construct an exact histogram on massive data, building an approximate histogram is a more feasible solution. This problem, however, is challenging in the cloud environment because of the special data organization and processing mode in the cloud. In this paper, we present HEDC++, an extended histogram estimator for data in the cloud, which provides efficient approximation approaches for both equi-width and equi-depth histograms. We design the histogram estimation workflow based on an extended MapReduce framework, and propose novel sampling mechanisms to balance sampling efficiency against estimation accuracy. We experimentally validate our techniques on Hadoop, and the results demonstrate that HEDC++ provides promising histogram estimates for massive data in the cloud.
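The sampling-based approximation idea can be sketched as follows. This is a minimal single-machine equi-width estimator over a uniform sample, not the HEDC++ MapReduce workflow; the bucket count, sampling rate, and seed are illustrative assumptions:

```python
import random

def approx_equi_width_hist(data, num_buckets, sample_rate, seed=42):
    # Build an approximate equi-width histogram from a uniform random
    # sample, then scale bucket counts up by the inverse sampling rate.
    rng = random.Random(seed)
    sample = [x for x in data if rng.random() < sample_rate]
    lo, hi = min(sample), max(sample)
    width = (hi - lo) / num_buckets or 1.0
    counts = [0] * num_buckets
    for x in sample:
        i = min(int((x - lo) / width), num_buckets - 1)
        counts[i] += 1
    return lo, width, [c / sample_rate for c in counts]

lo, width, counts = approx_equi_width_hist(range(1000), 10, 0.5)
```

In a MapReduce setting the sampling and per-bucket counting would run in mappers, with a reducer merging partial counts and applying the scale-up.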

15.
16.
This research takes the view that the modelling of temporal data is a fundamental step towards capturing the semantics of time. The problems inherent in the modelling of time are not unique to database processing; the representation of temporal knowledge and temporal reasoning arises in a wide range of other disciplines. In this paper an account is given of a technique for modelling the semantics of temporal data and its associated normalisation method. It discusses techniques for processing temporal data by employing a Time Sequence (TS) data model. It shows a number of different strategies used to classify the data properties of temporal data, then develops the model of temporal data and addresses issues of temporal data application design by introducing the concept of temporal data normalisation.

17.
One view of finding a personalized solution of reduct in an information system is grounded on the viewpoint that an attribute order can serve as a kind of semantic representation of user requirements. Thus the problem of finding personalized solutions can be transformed into computing the reduct on an attribute order. The second attribute theorem describes the relationship between the set of attribute orders and the set of reducts, and can be used to transform the problem of searching for solutions that meet user requirements into the problem of modifying a reduct based on a given attribute order. An algorithm implied by the second attribute theorem computes on the discernibility matrix; its time complexity is O(n² × m), where n is the number of objects and m the number of attributes of the information system. This paper presents another effective second-attribute algorithm for facilitating the use of the second attribute theorem, computing instead on the tree expression of an information system. The time complexity of the new algorithm is linear in n, and the algorithm is proved to be equivalent to the one on the discernibility matrix.
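The discernibility-matrix computation underlying the O(n² × m) bound can be sketched as follows. This is a minimal decision-table example; the attribute names and data are illustrative assumptions:

```python
from itertools import combinations

def discernibility_matrix(table, attrs, decision):
    # table: list of objects (attribute -> value dicts). For each pair of
    # objects with different decision values, record the condition
    # attributes on which the two objects differ. O(n^2 * m) entries/work.
    matrix = {}
    for i, j in combinations(range(len(table)), 2):
        if table[i][decision] != table[j][decision]:
            matrix[(i, j)] = {a for a in attrs if table[i][a] != table[j][a]}
    return matrix

def preserves_discernibility(subset, matrix):
    # An attribute subset is a super-reduct if it intersects every entry.
    return all(subset & entry for entry in matrix.values())

table = [
    {"a": 1, "b": 0, "d": 0},
    {"a": 0, "b": 0, "d": 1},
    {"a": 1, "b": 1, "d": 1},
]
matrix = discernibility_matrix(table, {"a", "b"}, "d")
```

The tree-expression algorithm the paper proposes avoids materializing these O(n²) pairwise entries, which is where the linear-in-n complexity comes from.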

18.
With the rapid increase of the short message service in China, information query technology based on Chinese natural language is becoming a research hotspot. An algorithm for Chinese natural language understanding based on domain knowledge is proposed, and it is applied to a Chinese short-message-based information query system. The algorithm is divided into three interrelated parts: word segmentation, syntax analysis, and construction of the SQL command. Domain knowledge is introduced into the word segmentation part, which simplifies the Chinese semantic understanding. In the syntax analysis part, syntax analysis is integrated with semantic analysis by associating a symbol in a grammar rule with an entity and a field of the application database. A syntax tree in the semantic database is constructed as an intermediate format for the SQL transformation, and the SQL command is formed by a depth-first search of the syntax tree.

19.
Knowledge engineering stems from E. A. Feigenbaum's proposal in 1977, but it is entering a new decade with new challenges. This paper first summarizes three knowledge engineering experiments we have undertaken to show the possibility of separating knowledge development from intelligent software development. We call this the ICAX mode of intelligent application software generation. The key of this mode is to generate the knowledge base, which is the source of intelligence of ICAX software, independently and in parallel with intelligent software development. This gives birth to a new and more general concept, "knowware". Knowware is a commercialized knowledge module with documentation and intellectual property, which is computer operable but free of any built-in control mechanism, meets some industrial standards, and is embeddable in software/hardware. The process of development, application and management of knowware is called knowware engineering. Two different knowware life cycle models are discussed: the furnace model and the crystallization model. Knowledge middleware is a class of software functioning in all aspects of the knowware life cycle models. Finally, this paper presents some examples of building knowware in the domain of information system engineering.

20.
Dynamically simplifying and recombining model data is important for the rapid visualization of large-scale forest scenes. In order to preserve the geometric features and visual perception of tree models, this paper presents a real-time information recombination method for complex 3D tree models based on visual perception. The method adopts a visual attention model and the visual characteristics of tree structures, and then uses geometry-based and image-based methods to simplify tree models and construct a hybrid representation model. The hybrid representation model reflects the visual perception features of 3D tree models and can embody topological semantics in dynamic simulation. In addition, the method automatically extracts the representation information of a 3D tree model based on visual perception, and recombines model information in real time according to the dynamic viewpoint of the virtual scene. Finally, the method is applied to the simplification of different tree models and compared with existing tree model simplification methods. Experimental results show that this method not only preserves better visual perception for 3D tree models, but also effectively decreases the geometric data of the forest scene and improves its rendering efficiency.

Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号