Similar Documents
20 similar documents found (search time: 36 ms)
1.
2.
Sensor networks, communication and financial networks, and web and social networks are becoming increasingly important in our day-to-day lives. They contain entities which may interact with one another. These interactions are often characterized by a form of autocorrelation, where the value of an attribute at a given entity depends on the values at the entities it interacts with. In this situation, the collective inference paradigm offers a unique opportunity to improve the performance of predictive models on network data, as interacting instances are labeled simultaneously by dealing with autocorrelation. Several recent works have shown that collective inference is a powerful paradigm, but it has mainly been developed for fully labeled training networks. In contrast, while it may be cheap to acquire the network topology, it may be costly to acquire node labels for training. In this paper, we examine how to explicitly consider autocorrelation when performing regression inference within network data. In particular, we study transductive collective regression in the common situation where the network is only sparsely labeled. We present an algorithm, called CORENA (COllective REgression in Network dAta), to assign a numeric label to each instance in the network. Specifically, we iteratively augment the representation of each instance with instances sharing correlated representations across the network. In this way, the proposed learning model is able to capture autocorrelations of labels over a group of related instances and feed the more reliable labels predicted by the transduction back into the labeled network. Empirical studies demonstrate that the proposed approach can boost regression performance in several spatial and social tasks.
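The core collective-inference idea, that unlabeled nodes borrow information from their neighbors, can be illustrated with a minimal transductive sketch. This is not the CORENA algorithm itself (its representation augmentation and reliability feedback are omitted); it is a plain iterative propagation over a toy adjacency structure, with all names chosen here for illustration.

```python
# Minimal sketch of transductive regression on a sparsely labeled network.
# Unlabeled nodes repeatedly take the mean of their neighbors' current
# estimates, while the few labeled nodes are clamped to their known values.

def propagate_labels(adjacency, labels, iterations=50):
    """adjacency: {node: [neighbors]}; labels: {node: value} for the
    (few) labeled nodes. Returns an estimate for every node."""
    estimates = {n: labels.get(n, 0.0) for n in adjacency}
    for _ in range(iterations):
        new = {}
        for node, neighbors in adjacency.items():
            if node in labels:                      # clamp known labels
                new[node] = labels[node]
            elif neighbors:
                new[node] = sum(estimates[m] for m in neighbors) / len(neighbors)
            else:
                new[node] = estimates[node]
        estimates = new
    return estimates

# A 4-node path 0-1-2-3 with only the endpoints labeled: autocorrelation
# lets the interior nodes take smoothly interpolated values.
graph = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
result = propagate_labels(graph, {0: 0.0, 3: 3.0})
```

On this path graph the estimates converge to the harmonic interpolation (roughly 1.0 and 2.0 for the interior nodes), which is the behavior sparse-label collective regression exploits.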

3.
4.
At present, scientific research projects are diverse in type and complex in execution; they are tied to funding management and procurement management and involve a wide range of administrative departments, which makes project management difficult to carry out and implement. To help research institutions conduct research work more effectively, execute projects in an orderly manner, and meet their targets, this paper, after a thorough study of the characteristics of research projects, proposes building a research management system covering project contracts, funding, procurement, and knowledge management. Built on the popular JFinal framework, the resulting information management system is lightweight and easy to extend, and provides comprehensive support for the implementation of research projects.

5.
The paper introduces an approach for transforming methods and knowledge between different engineering fields through general discrete mathematical models, called graph representations, which carry engineering knowledge of specific systems. The idea is demonstrated by showing the transformation of the known method in planetary gear trains—the Willis method—to two other engineering systems: linkages and trusses. In doing so, two efficient methods were derived: one for analysing compound linkages, such as those containing tetrads, and another for compound trusses. These new methods were derived from two relations characterising graph representations: a representation that is common to two engineering fields and the duality relation between representations. The new approach underlying these transformations is shown to open new ways of conducting engineering research by enabling a systematic derivation of engineering knowledge through knowledge transformations between the graph representations.

6.
Instance-based prediction of real-valued attributes (total citations: 2; self-citations: 0; other citations: 2)
Instance-based representations have been applied to numerous classification tasks with some success. Most of these applications involved predicting a symbolic class based on observed attributes. This paper presents an instance-based method for predicting a numeric value based on observed attributes. We prove that, given enough instances, if the numeric values are generated by continuous functions with bounded slope, then the predicted values are accurate approximations of the actual values. We demonstrate the utility of this approach by comparing it with a standard approach for value prediction. The instance-based approach requires neither ad hoc parameters nor background knowledge.
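The instance-based scheme described above amounts to storing the training instances and averaging the numeric targets of the nearest ones at query time. A minimal nearest-neighbour sketch (the paper's exact distance and averaging choices are not reproduced; k, the metric, and the example function are assumptions for illustration):

```python
# Minimal instance-based predictor for a numeric target: keep the training
# instances, find the k closest to the query, and average their values.

def knn_predict(train, query, k=3):
    """train: list of (attribute_vector, numeric_value) pairs."""
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    nearest = sorted(train, key=lambda pair: dist(pair[0], query))[:k]
    return sum(value for _, value in nearest) / len(nearest)

# Instances drawn from the bounded-slope function f(x) = 2x on [0, 1];
# with enough instances the prediction approximates the true value.
train = [((x / 10.0,), 2 * x / 10.0) for x in range(11)]
estimate = knn_predict(train, (0.45,), k=2)
```

Here the two nearest instances (at 0.4 and 0.5) average to 0.9, matching f(0.45) exactly, which reflects the paper's bounded-slope approximation guarantee.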

7.
Drawing on clustering ideas and the computation of universal gravitation, this paper proposes new approaches to two key problems in instance-based learning: using the number of nearby same-class examples to characterize an example's latent predictive power, and using instance quality to predict the class of new instances more accurately. On this basis, a clustering-style instance-based learning method is constructed, and its performance is tested experimentally on three complex data samples from standard machine-learning repositories. The comparative results show that the proposed method outperforms several other instance-based learning methods in both predictive power and the space efficiency of the learned result.
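The two ideas above, measuring an instance's quality by its same-class neighborhood and letting that quality weight the prediction, can be sketched with a gravity-style classifier. This is a rough stand-in for the published method, not its actual formulation; the radius, the mass definition, and the inverse-square pull are all illustrative assumptions.

```python
# Gravity-style sketch: each training instance gets a "mass" equal to the
# number of nearby same-class instances (its quality), and a query is
# assigned to the class exerting the largest total mass / distance**2 pull.

def gravity_classify(train, query, radius=1.0):
    """train: list of (vector, class_label) pairs."""
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    weighted = []
    for vec, label in train:
        mass = sum(1 for v, l in train
                   if l == label and v is not vec and dist(v, vec) <= radius)
        weighted.append((vec, label, max(mass, 1)))
    pull = {}
    for vec, label, mass in weighted:
        d = dist(vec, query) or 1e-9          # avoid division by zero
        pull[label] = pull.get(label, 0.0) + mass / d ** 2
    return max(pull, key=pull.get)

train = [((0.0, 0.0), "a"), ((0.2, 0.1), "a"), ((0.1, 0.2), "a"),
         ((3.0, 3.0), "b"), ((3.2, 3.1), "b")]
label = gravity_classify(train, (0.5, 0.5))
```

A query near the dense "a" cluster is pulled toward it both by proximity and by the higher masses of its tightly grouped members.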

8.
9.
A two-part experiment investigated human computer interface (HCI) experts' organization of declarative knowledge about the HCI. In Part 1, two groups of experts in HCI design—human factors experts and software development experts—and a control group of non-experts sorted 50 HCI concepts concerned with display, control, interaction, data manipulation and user knowledge into categories. In the second part of the experiment, the three groups judged the similarity of two sets of HCI concepts related to display and interaction, respectively. The data were transformed into measures of psychological distance and were analyzed using Pathfinder, which generates network representations of the data, and multidimensional scaling (MDS), which fits the concepts in a multidimensional space. The Pathfinder networks from the first part of the experiment differed in organization between the two expert groups, with human factors experts' networks consisting of highly interrelated sub-networks and software experts' networks consisting of central nodes and fewer, less interconnected sub-networks. The networks also differed across groups in concepts linked with such concepts as graphics, natural language, function keys and speech recognition. The networks of both expert groups showed much greater organization than did the non-experts' network. The network and MDS representations of the concepts for the two expert groups showed somewhat greater agreement in Part 2 than in Part 1. However, the MDS representations from Part 2 suggested that software experts organized their concepts on dimensions related to technology, implementation and user characteristics, whereas the human factors experts organized their concepts more uniformly according to user characteristics.
The discussion focuses on (1) the differences in cognitive models as a function of the amount and type of HCI design experience and (2) the role of cognitive models in HCI design and in communications within a multidisciplinary design team.
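Pathfinder network scaling, the analysis technique named above, prunes a weighted concept network so that a link survives only if no indirect path connects its endpoints more tightly. A compact sketch of the common PFnet(r = infinity, q = n - 1) variant follows, using minimax path distances computed Floyd-Warshall style; the toy weights are invented for illustration.

```python
# Pathfinder sketch: a link between two concepts is kept only if its weight
# does not exceed the "minimax" distance (the minimum over all paths of the
# maximum edge weight along the path).

def pathfinder(weights):
    """weights: symmetric dict {(i, j): distance} over integer nodes."""
    nodes = sorted({i for pair in weights for i in pair})
    INF = float("inf")
    d = {(i, j): INF for i in nodes for j in nodes}
    for (i, j), w in weights.items():
        d[i, j] = d[j, i] = w
    for i in nodes:
        d[i, i] = 0.0
    for k in nodes:                      # Floyd-Warshall, max-edge metric
        for i in nodes:
            for j in nodes:
                d[i, j] = min(d[i, j], max(d[i, k], d[k, j]))
    return {(i, j) for (i, j), w in weights.items() if w <= d[i, j]}

# Concepts 0-1 and 1-2 are psychologically close; the weak direct 0-2
# association is pruned because the path through concept 1 is tighter.
links = pathfinder({(0, 1): 1.0, (1, 2): 1.0, (0, 2): 3.0})
```

Applied to sorting-task distances, this is how the sparse, interpretable concept networks compared across the expert groups are obtained.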

10.
This paper argues the need for more effective: human-computer interactions; design of such interactions; and research to support such design. More effective research for design would result in more effective human-computer interactions. One contribution to more effective research would be the specification of relations between research and the design of human-computer interactions. The aim of this paper is to propose such a specification. Frameworks for specifying relations are proposed for: disciplines; the human-computer interaction (HCI) general design problem; and validation. The frameworks are used to model, and so to specify, the relations: between HCI research and the HCI general design problem; and within the particular scope of HCI, to support HCI research. Together, the models specify the relations between HCI research and the design of human-computer interactions. Meeting these specifications renders HCI knowledge coherent, complete and “fit-for-design-purpose”. An illustration of the relations, thus specified, is provided by a model of the planning and control of multiple task work in medical reception and its hypothetical application. The same frameworks are also used to specify the relations between Cognitive Science and the understanding of natural and artificial forms of intelligence. Lastly, they are further used to identify the relations not specified between Cognitive Science and the design of human-computer interactions. The absence of such relations renders Cognitive Science knowledge neither coherent, complete, nor “fit-for-design-purpose” (as opposed to “fit-for-understanding-purpose”). It is proposed how the relations specified for HCI and Cognitive Science might be used in the assessment of relations between other research and the design of human-computer interactions.
Finally, the paper recommends that such an assessment should be undertaken by any discipline, such as Cognitive Science, which claims a relation between its research and the design of human-computer interactions. Such an assessment would establish whether or not such relations are, or can be, specified. The paper concludes that specification of relations is required for more effective research support for the design of human-computer interactions.

11.
12.
Long- and medium-term production planning are tools to match production orders with resource capacity, and they can also be used as a baseline for material procurement. The lack of a detailed schedule for the manufacturing operations, however, may cause difficulties in providing proper material requirements planning and may affect the feasibility of the production plan itself. This paper proposes an approach, based on production process knowledge, to extract scheduling information from an aggregate production plan in order to support material procurement. The proposed approach is applied to an industrial case involving machining center production.

13.
Taking the apparel industry as the research and application background, and building on ontology-based knowledge representation and data modeling techniques, this paper introduces a method for developing ontology models, providing a general development pattern for constructing ontologies in the clothing domain. Following the basic steps of the ontology development process, it describes the basic rules for defining the class hierarchy, class properties, and instances, illustrated with simple examples. In particular, preliminary solutions are proposed for problems encountered while developing the clothing ontology.

14.
Using roles in object‐oriented design leads to a more natural representation of a given problem domain. Despite a lot of research into role–based systems, there is still a gap between conceptual representations of roles and the usage of roles in strongly typed object‐oriented programming languages such as C++ or Java. Since these languages associate classes and their instances exclusively and permanently, representing evolving objects that may take on different roles over time is difficult without special support: (i) entities must be reclassified any time they evolve and (ii) class hierarchies may grow exponentially if entities may take on several independent roles. This article shows how role hierarchies can be easily implemented in Java. It introduces the Java Role Package, which provides a set of classes to support handling of evolving objects without modifying the semantics of Java itself. Copyright © 2004 John Wiley & Sons, Ltd.  
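The Java Role Package API is not reproduced here, but the underlying idea, attaching role objects to a core entity at run time instead of encoding every combination in the class hierarchy, can be sketched in library-neutral form. All class and method names below are invented for illustration.

```python
# Delegation-based role sketch: an entity holds its roles as separate
# objects, so it can evolve (gain or drop roles) without reclassification
# and without an exponential class hierarchy.

class Person:
    def __init__(self, name):
        self.name = name
        self._roles = {}

    def add_role(self, role):
        self._roles[type(role).__name__] = role

    def drop_role(self, role_name):
        self._roles.pop(role_name, None)

    def as_role(self, role_name):
        return self._roles.get(role_name)   # None if the role is absent

class Employee:
    def __init__(self, employer):
        self.employer = employer

class Student:
    def __init__(self, university):
        self.university = university

# One person plays two independent roles at once and may later drop one.
p = Person("Ada")
p.add_role(Employee("ACME"))
p.add_role(Student("MIT"))
employer = p.as_role("Employee").employer
```

Compare this with the alternative the abstract criticizes: a static hierarchy would need classes such as EmployedStudent for every role combination.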

15.
A semantic information alignment method is proposed to align the representations used in building information models (BIMs) to the representations used in energy regulations. Compared to existing alignment efforts, which are either manual or semi-automated, the proposed method aims to automate the alignment process for supporting fully automated energy compliance checking. A first-level simple alignment method is proposed to align single design information instances to single regulatory concepts, in which (1) domain knowledge is used for interpreting the meaning of concepts to recognize candidate instances, and (2) deep learning is used for capturing the semantics behind the words to measure semantic similarity and select the matches. A final complex alignment method is proposed to recognize the instance groups belonging to a regulatory requirement, in which (1) supervised and unsupervised searching algorithms are used to identify the instance pairs, and (2) network modeling is used to group and link the instance pairs to the requirement. The proposed method showed 93.4% recall and 94.7% precision on the testing data.
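The first-level alignment step, matching each design instance to the regulatory concept with the highest semantic similarity above a threshold, can be sketched as follows. The paper uses deep-learning similarity; plain string similarity from `difflib` stands in here purely for illustration, and the instance and concept names are invented.

```python
# Toy first-level alignment: each design information instance is matched to
# the most similar regulatory concept, if the similarity clears a threshold.
# SequenceMatcher is a crude stand-in for a learned semantic similarity.

from difflib import SequenceMatcher

def align(instances, concepts, threshold=0.6):
    matches = {}
    for inst in instances:
        best, score = None, 0.0
        for concept in concepts:
            s = SequenceMatcher(None, inst.lower(), concept.lower()).ratio()
            if s > score:
                best, score = concept, s
        if score >= threshold:          # unmatched instances are left out
            matches[inst] = best
    return matches

matches = align(["ExteriorWall", "WindowUnit"],
                ["exterior wall", "window", "roof"])
```

Replacing the similarity function with an embedding-based one, as the paper does, changes only the inner scoring line; the select-the-best-match structure stays the same.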

16.
Danilo Montesi, Knowledge, 1996, 9(8): 809-507
Heterogeneous knowledge representation allows the combination of several knowledge representation techniques. For instance, connectionist and symbolic systems are two different computational paradigms and knowledge representations. Unfortunately, the integration of different paradigms and knowledge representations is not easy and is very often informal. In this paper, we propose a formal approach to integrating these two paradigms, where as the symbolic system we consider a (logic) rule-based system. The integration is operated at the language level, between neural networks and rule languages. The formal model that allows the integration is based on constraint logic programming and provides an integrated framework to represent and process heterogeneous knowledge. To achieve this, we define a new language, together with its operational semantics, that allows the above issues to be expressed and modelled in a natural and intuitive way.

17.
Capturing propositional logic, constraint satisfaction problems and systems of polynomial equations, we introduce the notion of systems with finite instantiation by partial assignments, fipa-systems for short, which are independent of special representations of problem instances, but which are based on an axiomatic approach with instantiation (or substitution) by partial assignments as the fundamental notion. Fipa-systems seem to constitute the most general framework allowing for a theory of resolution with nontrivial upper and lower bounds. For every fipa-system we generalise relativised hierarchies originating from generalised Horn formulas [14,26,33,43], and obtain hierarchies of problem instances with recognition and satisfiability decision in polynomial time and linear space, quasi-automatising relativised and generalised tree resolution and utilising a general “quasi-tight” lower bound for tree resolution. And generalising width-restricted resolution from [7,14,25,33], for every fipa-system a (stronger) family of hierarchies of unsatisfiable instances with polynomial time recognition is introduced, weakly automatising relativised and generalised full resolution and utilising a general lower bound for full resolution generalising [7,17,25,33].

18.
One-vs-One strategy is a common and established technique in Machine Learning to deal with multi-class classification problems. It consists of dividing the original multi-class problem into easier-to-solve binary subproblems considering each possible pair of classes. Since several classifiers are learned, their combination becomes crucial in order to predict the class of new instances. Due to the division procedure a series of difficulties emerge at this stage, such as the non-competence problem. Each classifier is learned using only the instances of its corresponding pair of classes, and hence, it is not competent to classify instances belonging to the rest of the classes; nevertheless, at classification time all the outputs of the classifiers are taken into account because the competence cannot be known a priori (the classification problem would be solved). On this account, we develop a distance-based combination strategy, which weights the competence of the outputs of the base classifiers depending on the closeness of the query instance to each one of the classes. Our aim is to reduce the effect of the non-competent classifiers, enhancing the results obtained by the state-of-the-art combinations for One-vs-One strategy. We carry out a thorough experimental study, supported by the proper statistical analysis, showing that the results obtained by the proposed method outperform, both in terms of accuracy and kappa measures, the previous combinations for One-vs-One strategy.
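The distance-based combination idea can be sketched compactly: every pairwise classifier still votes, but its vote is down-weighted when the query lies far from both of its classes, which is exactly the non-competence situation. Nearest-centroid stands in for the base learner, and the weighting formula below is an illustrative assumption, not the paper's exact scheme.

```python
# One-vs-One with a distance-based competence weight on each pairwise vote.

from itertools import combinations

def centroid(points):
    return tuple(sum(c) / len(points) for c in zip(*points))

def dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def ovo_predict(data, query):
    """data: {class_label: [vectors]}."""
    cents = {c: centroid(v) for c, v in data.items()}
    votes = {c: 0.0 for c in data}
    for a, b in combinations(data, 2):          # one binary problem per pair
        winner = a if dist(query, cents[a]) <= dist(query, cents[b]) else b
        # competence weight: large when the query is close to this pair,
        # small when both classes are far away (non-competent classifier)
        weight = 1.0 / (1.0 + min(dist(query, cents[a]), dist(query, cents[b])))
        votes[winner] += weight
    return max(votes, key=votes.get)

data = {"x": [(0.0, 0.0), (1.0, 0.0)],
        "y": [(5.0, 5.0), (6.0, 5.0)],
        "z": [(10.0, 0.0), (11.0, 0.0)]}
pred = ovo_predict(data, (0.5, 0.2))
```

For the query near class "x", the y-vs-z classifier (which cannot know about "x") still fires, but its distant classes give it a near-zero weight, so it barely affects the aggregated vote.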

19.
Software defect prediction has been regarded as one of the crucial tasks to improve software quality by effectively allocating valuable resources to fault-prone modules. It is necessary to have a sufficient set of historical data for building a predictor. Without a set of sufficient historical data within a company, cross-project defect prediction (CPDP) can be employed where data from other companies are used to build predictors. In such cases, a transfer learning technique, which extracts common knowledge from source projects and transfers it to a target project, can be used to enhance the prediction performance. There exists the class imbalance problem, which causes difficulties for the learner to predict defects. The main impacts of imbalanced data under cross-project settings have not been investigated in depth. We propose a transfer cost-sensitive boosting method that considers both knowledge transfer and class imbalance for CPDP when given a small amount of labeled target data. The proposed approach performs boosting that assigns weights to the training instances with consideration of both distributional characteristics and the class imbalance. Through comparative experiments with the transfer learning and the class imbalance learning techniques, we show that the proposed model provides significantly higher defect detection accuracy while retaining better overall performance. As a result, a combination of transfer learning and class imbalance learning is highly effective for improving the prediction performance under cross-project settings. The proposed approach will help to design an effective prediction model for CPDP. The improved defect prediction performance could help to direct software quality assurance activities and reduce costs. Consequently, the quality of software can be managed effectively.
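The cost-sensitive boosting ingredient can be sketched in a bare-bones form: the usual AdaBoost weight update, with misclassified minority-class (defective) instances receiving an extra cost factor so the learner keeps concentrating on them. The transfer-learning part of the published method is omitted, and the decision-stump learner, cost value, and toy data are illustrative assumptions.

```python
# Cost-sensitive AdaBoost sketch on 1-D data with labels in {-1, +1},
# where +1 (defective) is the minority class.

import math

def stump(data, weights):
    """Best threshold classifier: returns (error, threshold, sign)."""
    best = None
    for t in sorted(x for x, _ in data):
        for sign in (1, -1):
            err = sum(w for (x, y), w in zip(data, weights)
                      if sign * (1 if x >= t else -1) != y)
            if best is None or err < best[0]:
                best = (err, t, sign)
    return best

def boost(data, rounds=10, minority_cost=2.0):
    n = len(data)
    w = [1.0 / n] * n
    ensemble = []
    for _ in range(rounds):
        err, t, sign = stump(data, w)
        err = max(min(err, 1 - 1e-9), 1e-9)     # clip to avoid log(0)
        alpha = 0.5 * math.log((1 - err) / err)
        ensemble.append((alpha, t, sign))
        for i, (x, y) in enumerate(data):
            pred = sign * (1 if x >= t else -1)
            # extra cost when a defective (minority) instance is missed
            cost = minority_cost if (y == 1 and pred != y) else 1.0
            w[i] *= cost * math.exp(-alpha * y * pred)
        total = sum(w)
        w = [wi / total for wi in w]
    return ensemble

def predict(ensemble, x):
    score = sum(a * s * (1 if x >= t else -1) for a, t, s in ensemble)
    return 1 if score >= 0 else -1

data = [(0.1, -1), (0.2, -1), (0.3, -1), (0.4, -1), (0.9, 1)]
model = boost(data)
```

The cost factor only changes the weight-update line; this is what keeps the rare defective modules from being drowned out by the majority class during boosting.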

20.
This paper proposes a new approach to the schema translation problem. We deal with schemas whose metaschemas are instances of the OMG’s MOF. Most metaschemas can be defined as an instance of the MOF; therefore, our approach is widely applicable. We leverage the well-known object-oriented concepts embedded in the MOF and its instances (object types, attributes, relationship types, operations, IsA hierarchies, refinements, invariants, pre- and postconditions, etc.) to define metaschemas, schemas and their translations. The main contribution of our approach is the extensive use of object-oriented concepts in the definition of translation mappings, particularly the use of operations (and their refinements) and invariants, both of which are formalized in OCL. Our translation mappings can be used to check that two schemas are translations of each other, and to translate one into the other, in both directions. The translation mappings are declaratively defined by means of pre- and postconditions and invariants, and they can be implemented in any suitable language. From an implementation point of view, by taking a MOF-based approach we have a wide set of tools available, including tools that execute OCL. By way of example, we have defined all schemas and metaschemas in this paper and executed all the OCL expressions in the USE tool.

